Flask Application Skeleton Example

A friend called me the other day wanting to learn more about putting together a Flask application. After brewing a good cup of Catimor coffee from the Dominican Republic, I decided to put together a sample Flask application for starters and share it with everybody.

This Flask application includes:

  1. Importing the Flask class from the flask module.
  2. Creating a Flask application instance with app = Flask(__name__).
  3. Defining a route / using the @app.route() decorator and a corresponding view function (index()) that returns a simple message.
  4. Running the Flask application with app.run().

You can save this code to a Python file (e.g., app.py) and run it using Python. The Flask development server will start, and you can access the application in your web browser at http://localhost:5000/.

from flask import Flask

# Create a Flask application instance
app = Flask(__name__)

# Define a route and its corresponding view function
@app.route('/')
def index():
    return 'Hello, World!'

# Run the Flask application when this file is executed directly
if __name__ == '__main__':
    app.run(debug=True)

As usual, you can contact me if you have any questions or want to collaborate.

Docker CLI basic commands

Docker is like a virtual box that lets you put your app inside and run it without worrying about messing up your computer. It’s like having a bunch of separate little rooms where you can run different apps at the same time without them bothering each other. Each room (or container) is lightweight and has everything your app needs to work, so you don’t have to worry about what’s already on your computer. Plus, you can easily share these containers with others, making sure everyone has the same setup.

Install Docker

Get the latest version from the official Docker site.

Some sample projects

https://github.com/docker/awesome-compose

Images

#Build an Image from a Dockerfile
docker build -t <image_name> .
#Build an Image from a Dockerfile without the cache
docker build -t <image_name> . --no-cache
#List local images
docker images
#Delete an Image
docker rmi <image_name>
#Remove all unused images
docker image prune

Docker hub

A service platform provided by Docker to store and share container images.

#Login into Docker
docker login -u <username>
#Publish an image to Docker Hub
docker push <username>/<image_name>
#Search Hub for an image
docker search <image_name>
#Pull an image from Docker Hub
docker pull <image_name>

Help commands

#Start the docker daemon
dockerd
#Get help with Docker. Can also use --help on all subcommands
docker --help

This one is very cool. It tells you all details about your system:

#Display system-wide information
docker info

Containers

A container is like a special environment created from an image. It always behaves consistently, no matter where it’s running. Containers keep your software separate from its surroundings, making sure it works the same way whether you’re developing, testing, or running it live.

#Create and run a container from an image, with a custom name:
docker run --name <container_name> <image_name>
#Run a container and publish its port(s) to the host
docker run -p <host_port>:<container_port> <image_name>
#Run a container in the background
docker run -d <image_name>
#Start or stop an existing container:
docker start|stop <container_name> (or <container-id>)
#Remove a stopped container:
docker rm <container_name>
#Open a shell inside a running container:
docker exec -it <container_name> sh
#Fetch and follow the logs of a container:
docker logs -f <container_name>
#To inspect a running container:
docker inspect <container_name> (or <container_id>)
#To list currently running containers:
docker ps
#List all docker containers (running and stopped):
docker ps --all
#View resource usage stats
docker container stats

I like to keep these commands handy. Sometimes you tend to forget. Contact me if you have any questions or want to collaborate.

How to check users group in Linux

After I got back from my trip to Puebla, Mexico, where I went to taste and learn about pulque, a friend called me looking to find out how to check a user's groups in Linux. These are the steps I took to find users and groups in Linux.

Steps to check users on a Linux server

Use the cat /etc/passwd command to view the contents of the /etc/passwd file, which contains information about all the user accounts on the system.

Alternatively, you can use the getent passwd command to query the system’s user database and list all the users.

To get a count of the total number of users, you can pipe the output of the cat /etc/passwd or getent passwd commands to the wc -l command to count the number of lines, which corresponds to the number of users.

If you only want to list the normal user accounts (excluding system accounts), you can use the getent passwd {1000..60000} command to list users with UIDs (User IDs) in the typical range for normal user accounts.

You can also use the awk command to extract just the usernames from the /etc/passwd file, for example: awk -F':' '{ print $1 }' /etc/passwd
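To see how a UID-range filter works without touching a real system, here is a small sketch against a hypothetical /etc/passwd-style sample (the user names are made up; on a real server you would point awk at /etc/passwd itself):

```shell
# Hypothetical /etc/passwd-style sample data
printf 'root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1::/usr/sbin:/usr/sbin/nologin\nalice:x:1000:1000::/home/alice:/bin/bash\n' > /tmp/passwd.sample

# Print only usernames whose UID (field 3) is in the normal-user range
awk -F':' '$3 >= 1000 && $3 < 60000 { print $1 }' /tmp/passwd.sample
# → alice
```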

How to check a user’s group?

You can use the groups command:

groups

This will list all the groups the current user is a member of.

To check the groups for a different user, use the following syntax:

groups <username>

Use the id command to get more details about a user:

id <username>

This will show the user’s primary group, as well as any secondary groups they belong to.

Another option is to use the getent command to query the system’s user database and list the groups a user belongs to:

getent group | grep <username>

The above will show all the groups the user is a member of; grep will usually highlight the username in the output.

If you want to see all the users that belong to a specific group, you can use the following command:

getent group <groupname> | cut -d: -f4
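To see the cut step in isolation, here is a sketch with a hypothetical group entry (the group and user names are made up):

```shell
# Field 4 of a group entry is the comma-separated list of members
echo 'developers:x:1001:alice,bob' | cut -d: -f4
# → alice,bob
```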

Checking a user's groups in Linux is quite easy, but if you don't do it on a daily basis it is easy to forget. That is my case. You can always contact me if you have any questions or want to collaborate. Thanks!

Connect to Exchange Online using PowerShell

This one was a new experience for me. After brewing a good cup of Caturra variety coffee at home, a friend contacted me wanting to connect to Exchange Online using PowerShell. This was due to some strange email activity with one of her users. Connecting to Exchange Online using PowerShell allows administrators to manage Exchange Online settings, mailboxes, and other features remotely.

First you need to check if the Exchange Online module is installed:

Get-Module -ListAvailable -Name ExchangeOnlineManagement

If the module is not installed, open Windows PowerShell as Administrator and run:

Install-Module -Name ExchangeOnlineManagement -Force

Follow the prompts. Yes, you can install from untrusted repositories. Press Y to confirm. If you already have the module installed and want to update it:

Update-Module ExchangeOnlineManagement

Now, connect to Exchange Online:

Connect-ExchangeOnline

Enter your credentials in the pop-up window, usually with MFA (multi-factor authentication).

After you connect, you can run the desired command against any mailbox. For example, we wanted to check all rules on a shared mailbox:

Get-InboxRule -Mailbox sharedbox@hola.com | fl

After you’re done, you can disconnect:

Disconnect-ExchangeOnline -Confirm:$False

Remember to contact me if you have any questions or would like to collaborate. Thanks!

Configure log rotation for Postfix on Linux

A client running a Postfix server called me the other day wanting to configure log rotation for Postfix on Linux. This was a Red Hat box and I didn't have much experience configuring log rotation. After my trip to Coyoacán in Mexico City I was very relaxed and started putting things together. Below is what I found.

To configure log rotation for Postfix on a Linux server, you can create a custom logrotate configuration file specifically for Postfix log files. Postfix logs are usually located at /var/log/maillog, /var/log/mail.log or something like that. You can create a logrotate configuration file at /etc/logrotate.d/postfix. Here's a sample file:

/var/log/maillog
{
    daily
    rotate 14
    compress
    dateext
    missingok
    notifempty
    copytruncate
}

Log rotation options explained

  • daily: Rotate the log files daily.
  • rotate 14: Keep up to 14 rotated log files.
  • compress: Compress the rotated log files using gzip.
  • dateext: Append the date to rotated file names instead of a number.
  • missingok: Do not generate an error if the log file is missing.
  • notifempty: Do not rotate the log file if it’s empty.
  • copytruncate: Copy the log file and then truncate the original in place, so the mail log can keep being written without restarting anything.
  • delaycompress: Compress the rotated log files one rotation cycle later (not used in this example).
  • postrotate/endscript: Execute a custom command after log rotation. This one is very cool! I didn’t use it in this example. Good for running scripts before (prerotate) or after (postrotate) rotation.
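As a sketch of how delaycompress and postrotate would fit in (the HUP command below is an assumption that rsyslog writes the mail log; with copytruncate, as in the sample above, no postrotate is needed):

```
/var/log/maillog
{
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        # Assumption: rsyslog writes maillog; signal it to reopen the file
        /usr/bin/killall -HUP rsyslogd 2>/dev/null || true
    endscript
}
```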

Save the Configuration File: Save the changes to the Postfix logrotate configuration file and exit the text editor.

Test Log Rotation: Test the log rotation configuration using the logrotate command with the -d or --debug option to simulate rotation without actually rotating files:

sudo logrotate -d /etc/logrotate.d/postfix

You can add the -v option for verbose output. Use it! Verify that log rotation works as expected and that no errors occur.

Automatic Rotation: Logrotate is typically run as a cron job. It will automatically rotate log files based on the schedule defined in the configuration file.

As usual, you can contact me if you have any questions or would like to collaborate. I’m working on fixing my online shop for now. Stay tuned.

Find uptime on Windows

The other day a client wanted to find out if his Windows machines were rebooting over the weekend. They had a scheduled task to apply updates and reboot. He wanted to quickly find uptime on Windows. I found some options:

Command line to find uptime on Windows

  1. Open your command line
  2. Type systeminfo | find "Boot Time"

You should see something like:

System Boot Time: 3/23/2024, 2:00:44 AM

You can also query a remote Windows machine, but it seems to take longer to get results:

systeminfo /S "machineName" | find "Boot Time"

Use the Task Manager to find uptime on Windows

  1. Open your task manager
  2. Expand ‘More Details’
  3. Go to Performance
  4. CPU
  5. See on the bottom ‘Up time’

Another, trickier way

This command is a bit tricky and I still need to find out how to read the results:

C:\Users\Tom> wmic path Win32_OperatingSystem get LastBootUpTime
LastBootUpTime
20240323020044.500000-240
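It turns out the value is WMI's CIM datetime format: yyyymmddHHMMSS.ffffff±UUU, where the trailing ±UUU is the offset from UTC in minutes. So 20240323020044.500000-240 reads as 2024-03-23 02:00:44 local time, UTC-4. A quick illustrative parse of the fixed-width fields (shown with awk just to make the positions clear; on Windows itself you would eyeball it or use PowerShell):

```shell
# Split the fixed-width CIM timestamp into date and time parts
echo '20240323020044.500000-240' | awk '{
  printf "%s-%s-%s %s:%s:%s\n",
    substr($0,1,4), substr($0,5,2), substr($0,7,2),
    substr($0,9,2), substr($0,11,2), substr($0,13,2)
}'
# → 2024-03-23 02:00:44
```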

After putting this brief tutorial together I decided to brew a good cup of Caturra coffee at home and head out for a walk.

As usual, you can contact me if you have questions or would like to collaborate.

SEO Ready Website for Maximum Visibility

Making your website SEO ready involves optimizing various aspects of your site to improve its visibility and ranking in search engine results. I have to admit this topic is not my favorite, but it is important if you want your website to be found by our good friend Google and other search engines.

After brewing my favorite coffee from Peru, I decided to put together a brief list of key steps to make your website SEO ready:

SEO analysis

  1. SEO title width: The SEO title has a viewable limit. Under 48 characters should be OK. Don’t make it too short either. Have at least four words. Include the focus keyphrase at the beginning.
  2. Text length: Write at least 300 words for your blog content. Include the focus keyphrase at least 2 times depending on the length of your text.
  3. Meta description length: The length should be from 120 to 156 characters long. Always include the focus keyphrase here.
  4. Key phrase length: Use the recommended maximum of 4 content words.
  5. Internal links: Have at least one internal link. Point it to relevant content on your website.
  6. Outbound links: Have at least one external link.
  7. Images: Include at least one image with ALT tag using your focus keyphrase.

By following these steps and staying up-to-date with SEO best practices, you can make your website more SEO friendly and improve its visibility in search engine results.

SEO Website Readability

To enhance the ranking of your content, ensure it’s engaging and enjoyable to read. When your audience appreciates your writing, so will search engines. Here are some steps to follow to optimize your website’s readability:

  1. Passive voice: Use mostly active voice. Keep passive voice to 10% of sentences or less.
  2. Consecutive sentences: Vary your sentence openings. Do not start more than two consecutive sentences with the same word.
  3. Subheading distribution: Use subheadings to divide your content, roughly one for every 150 words. Try to use your focus keyphrase in some of them.
  4. Paragraph length: Paragraphs should be more than two sentences but less than 200 words.
  5. Sentence length: Do not write sentences longer than 20 words.
  6. Transition words: Use transition words throughout; the recommendation is that about 30% of your sentences contain one.

Print the email addresses with status="bounced" in the Postfix mail log

To print the email addresses with status “bounced” only from the provided log file, you can use grep to filter lines containing “status=bounced” and then use awk to extract the email addresses. Here’s how you can do it:

grep "status=bounced" /var/log/mail.log | grep -oE 'to=<[^>]+>' | awk -F'<' '{print $2}' | awk -F'>' '{print $1}'

Explanation:

  • grep "status=bounced" /var/log/mail.log: This grep command filters lines from the mail log file containing “status=bounced”.
  • grep -oE 'to=<[^>]+>': This grep command uses a regular expression (-oE) to extract occurrences of “to=<…>” from the filtered lines and only outputs the matching text.
  • awk -F'<' '{print $2}': This awk command uses “<” as the field separator (-F'<') and prints the second field (the part after “<”).
  • awk -F'>' '{print $1}': This awk command uses “>” as the field separator (-F'>') and prints the first field (the part before “>”).

This series of commands will extract and print only the email addresses from lines with status “bounced” in the log file, removing any surrounding characters or additional information.
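To see the pipeline end to end, here is a sketch against a tiny hypothetical log (the host names and addresses are made up):

```shell
# Two hypothetical Postfix log lines: one bounced, one delivered
printf 'May  1 10:00:01 mx postfix/smtp[123]: 4AB: to=<user@example.com>, status=bounced (host said: 550 no such user)\nMay  1 10:00:02 mx postfix/smtp[124]: 4CD: to=<ok@example.org>, status=sent (250 ok)\n' > /tmp/mail.log.sample

# Same pipeline as above, pointed at the sample file
grep "status=bounced" /tmp/mail.log.sample | grep -oE 'to=<[^>]+>' | awk -F'<' '{print $2}' | awk -F'>' '{print $1}'
# → user@example.com
```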


How to use pflogsumm command to analyze postfix logs

To use the pflogsumm command to find general rejection reasons in your mail logs, you need to ensure that your mail server logs contain the necessary information about rejected or bounced emails. Once you have the logs, you can use pflogsumm to analyze them and generate a summary report that includes rejection reasons. Here’s how you can do it:

Install pflogsumm:

sudo yum install postfix-perl-scripts

or

sudo apt-get install postfix-pflogsumm

Once installed, you can run pflogsumm with the path to your mail log file(s) as an argument. For example:

pflogsumm /var/log/mail.log

This command will analyze the mail logs and generate a summary report, including various statistics and information about mail delivery, rejection reasons, etc.

I feel pflogsumm gives a general idea, but it does not show you the specific emails that are deferring or the reason given (the "said:" text). I need to look deeper into this and wear my favorite soy hat beanie to concentrate better.

AWStats install and configure in RedHat RHEL

After brewing a great Latin American coffee from Peru, I received an email from one of my clients. The client recently installed Apache on Red Hat RHEL and wanted to install and configure AWStats in order to track their traffic.

Personally, I don’t have much experience dealing with log files other than just using vim or grep to find the info I need. Anyways, I decided to wear my Espresso machine beanie and take a walk in NYC during the first snow day.

First, you need to SSH to your Linux server and install the perl dependencies:

sudo yum install perl-Time-HiRes perl-libwww-perl 

Next, download the latest version of AWStats or use yum to install it. This is what I did to get the latest AWStats version:

wget https://downloads.sourceforge.net/awstats/awstats-7.9.tar.gz 

Extract the tar.gz

tar xvzf awstats-7.9.tar.gz 

Move the extracted folder to Apache root directory:

mv awstats-7.9 /var/www/html/awstats

Configure AWStats

Make a new directory:

mkdir /etc/awstats

Then copy the sample config file from the awstats folder you moved above to your /etc/awstats:

sudo cp /var/www/html/awstats/wwwroot/cgi-bin/awstats.model.conf /etc/awstats/awstats.sample.com.conf 

Replace the name awstats.sample.com with your domain name/hostname.

Edit the new config you moved above. Use the text editor of your choice:

sudo vim /etc/awstats/awstats.sample.com.conf 

There are 3 main options I configured for my case:

LogFile: Path to your Apache log file.

SiteDomain: Domain name of your site.

DirData: Location where AWStats will store its data.

Configure Apache

Create a new host file:

sudo vim /etc/httpd/conf.d/awstats.<your_domain>.conf

You can use this sample conf file:

<VirtualHost *:80>
  ServerName awstats.example.com
  DocumentRoot /var/www/html/awstats/wwwroot
  ScriptAlias /awstats/ /var/www/html/awstats/wwwroot/cgi-bin/
  <Directory /var/www/html/awstats/wwwroot/cgi-bin/>
    Options ExecCGI
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>
  <Directory /var/www/html/awstats/wwwroot/>
    Options None
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>

Save changes. You may need to reload Apache.

Go back to:

cd /var/www/html/awstats/ 

and run the perl script in there:

perl tools/awstats_updateall.pl now -config=<your_domain>.com -awstatsprog=./wwwroot/cgi-bin/awstats.pl

Finally, access your stats using this URL:

http://awstats.<your_domain>/awstats/awstats.pl

You should see some data displaying:

Configure AWStats RedHat server

Hopefully this can help anybody install and configure AWStats on their server. One more task you should consider: set up a cronjob to auto-update the AWStats database so it picks up new log entries. We can place the update command above into a cronjob:

#maybe run every hour or daily?
0 * * * * cd /var/www/html/awstats/ && perl tools/awstats_updateall.pl now -config=example.com -awstatsprog=./wwwroot/cgi-bin/awstats.pl

Change the cronjob as needed. Thanks!