3 Ways to Check if a Port is in Use on Linux

After a delayed flight from Montevideo, Uruguay, I decided to improve my Linux networking skills and teach myself a bit more about how to check if a port is in use on Linux.

I must admit, I am not a networking person. To be honest, I don’t really enjoy that branch of the tech world, but I also know it is super important to master. For now, I’m OK with understanding and being able to troubleshoot the most common networking issues on Linux, macOS and Windows.

Check if port is in use on Linux

All commands are to be run using our friend the Linux Terminal. 🙂 I use bash.

lsof command

This tool was not installed on my Linux box. You can install it with yum, apt, or whatever package manager you’re using. Here are a couple of sample commands.

To check if a specific port is open:

sudo lsof -i:22

View all ‘Listening’ ports:

sudo lsof -i -P -n | grep LISTEN

You should get an output similar to this:

(screenshot: lsof command output)
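If you need to do this check from a script rather than eyeballing lsof output, a quick sketch in Python works too (just the standard socket module, nothing to do with lsof itself): try connecting to the port and see if something answers.

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is accepting TCP connections on the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns an error code instead of raising; 0 means success
        return s.connect_ex((host, port)) == 0
```

For example, port_in_use(22) comes back True on a box running sshd.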

netstat command

The netstat command is one I have used in the past on a Windows box, but I always found its output a bit complicated to read. Anyway, below are some sample commands.

To view all open ports:

netstat -tulpn | grep LISTEN

To check if a specific port is open, for example SSH on port 22:

netstat -tulpn | grep ':22'

nmap command

I have used the nmap command mainly on Ubuntu. I like it because you can scan a whole network and the output looks a bit more organized.

To scan your machine for open ports:

nmap localhost

You should get:

Starting Nmap 7.94 ( https://nmap.org ) at 2023-12-07 16:11 EST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000034s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 997 closed tcp ports (conn-refused)
PORT     STATE SERVICE
22/tcp   open  ssh
88/tcp   open  kerberos-sec
5900/tcp open  vnc

You can run nmap on another host:

sudo nmap -sT -O 192.168.2.13   # TCP ports
or
sudo nmap -sU -O 192.168.2.13   # UDP ports
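If you just want to script a quick localhost check without nmap, a tiny sketch in Python can do a plain TCP connect over a range of ports (far less capable than nmap, no service detection, just open/closed):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the ports from `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # 0 from connect_ex means the connection succeeded
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, scan_ports("127.0.0.1", range(20, 100)) lists the low open ports on your own machine.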

Very cool! I hope this brief tutorial helps somebody out there. Contact me if you have questions or would like to collaborate. Also, remember to check out my coffee mug and T-shirt designs.

Postman Cannot Reach localhost macOS Sonoma

This was very frustrating, I have to say! I came to Punta del Diablo in Uruguay to get some time away from big cities like NYC, and at the same time I wanted to do some self-improvement while away. I started learning more about APIs, but I kept hitting issues with Postman not seeing my localhost. I was not aware that upgrading from Big Sur to Sonoma would also mess things up here.

After taking a long walk through the empty beach town of Punta del Diablo and coming back to my self-learning adventure (learning Python REST APIs using Flask), I encountered another issue when I ran:

flask run

I got error:

Address already in use
Port 5000 is in use by another program. Either identify and stop that program, or start the server with a different port.
On macOS, try disabling the 'AirPlay Receiver' service from System Preferences -> General -> AirDrop & Handoff.

WTF! I said. But wait…this error was actually useful. I did not know about this ‘AirPlay Receiver’ sh!t. So, I went to check those settings on my Mac and turned it OFF.

Before doing the above, I checked for open ports in my Terminal with the command lsof -i :5000, which confirmed something was already listening on that port.

After disabling ‘AirPlay Receiver’ Postman worked using:

localhost:5000
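By the way, if you ever just need any free port for a dev server (instead of fighting Apple over 5000), you can ask the OS for one from Python and pass it to flask run --port:

```python
import socket

def free_port():
    """Bind to port 0 so the kernel picks an unused TCP port, then report it."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

print(free_port())
```

The printed number is then safe to hand to flask run --port (at least until something else grabs it).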

Why is Apple doing this? Why take port 5000 for these services? If you have any clues write them in the comments area below. Thanks.

You can always contact me if you have any questions or if you would like to collaborate.

Python3.9: bad interpreter: No such file or directory macOS Sonoma

Last week I decided to finally upgrade my macOS from Big Sur to Sonoma. As usual, I got issues after the upgrade: security settings, app permissions, and a custom backup script I put together to back up my files to a NAS drive stopped working. This time I got the error “Python3.9: bad interpreter: No such file or directory“.

I usually play around with Docker for personal projects and to improve my skills. I was working on a Python/Flask/SQLAlchemy app and decided to use my native Python instead of creating another container in Docker. To solve the error above I did the following (after brewing a great cup of coffee from Peru at home, of course!):

  1. Installed Homebrew
  2. After installing Homebrew I ran: brew install pyenv

Next, you can use pyenv to manage your python versions – Very cool! Use the below command for example:

pyenv install 3.9.2 

If you want a different version, just change the number, e.g.:

pyenv install 3.11.4

Remember, you need to have the Xcode Command Line Tools installed on your Mac before installing Homebrew.

Final steps – update your .bash_profile (I still use bash instead of zsh – yes!):

echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile

If you do not have a bin directory in your $PYENV_ROOT folder (you may only have a shims directory), run this version of the command instead:

echo 'export PATH="$PYENV_ROOT/shims:$PATH"' >> ~/.bash_profile

Add pyenv to your Terminal:

echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n  eval "$(pyenv init -)"\nfi' >> ~/.bash_profile

And finally, reset your Terminal:

reset

I need to thank freeCodeCamp for the reference above. Again, you can contact me if you have any questions or if you would like to collaborate. Also, check out my designs below.

No Matching Key Exchange Method Found when SSH Older Linux Server from macOS Sonoma

The other day I decided to upgrade to macOS Sonoma. As usual, when you do an OS upgrade something breaks. I was not able to SSH to an old Linux server.

Before, on macOS Big Sur, I was able to SSH to any Linux box from my Terminal. My guess is that the newer OpenSSH in Sonoma dropped some legacy algorithms for security reasons. Anyway, I decided to take a walk during the NYC Marathon, take some photos, and then take this one on.

After upgrading from macOS BigSur to Sonoma I tried to SSH to a Linux box I normally do for a client and got the below error:

Unable to negotiate with <TheOldServerName> port 22: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1

WTF! The solution for now is to include the below flags in your ssh command:

ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oHostKeyAlgorithms=+ssh-rsa myuser@<TheOldServerName>

This allowed me to log in. I had to combine both options because the KexAlgorithms flag alone was not enough: the old server also only offers an ssh-rsa host key, which is what the HostKeyAlgorithms flag re-enables.
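If you connect to that box often, instead of typing those flags every time you can pin them for just that one host in your ~/.ssh/config (the host alias and user below are placeholders, keep your real hostname):

```
Host oldserver
    HostName <TheOldServerName>
    User myuser
    KexAlgorithms +diffie-hellman-group1-sha1
    HostKeyAlgorithms +ssh-rsa
```

After that, a plain ssh oldserver works, and the rest of your hosts keep the modern defaults.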

Hope this quick ssh tip helps somebody that just updated to macOS Sonoma. You can always contact me if you have any questions.

Docker compose down --rmi all Command

This command got me very confused the other day while working on my little project app for tracking your blood pressure readings.

After brewing a fantastic medium roast specialty coffee from Brazil I found some brief explanations for this Docker command.

This post is going to be super brief. So, to sum up this is what I learned:

docker compose down --rmi all

This removes the images used by the services defined in the docker-compose.yml file. (Note that --rmi is a flag of the down subcommand.)

The above will NOT remove base images pulled in by a Dockerfile that was used to build a service in the docker-compose file. For example:

  db:
    restart: always
    build: db/.

Inside my db directory I have a Dockerfile:

FROM mariadb:10.5
LABEL maintainer="ITPro Helper"
RUN apt update && apt install -y python3

A simple and brief example. I hope it helps. Some references below:

https://github.com/docker/compose/issues/6971

RuntimeError: working outside of application context in Python

I have been working on a personal Flask app using SQLAlchemy for learning purposes and to improve my coding skills. As usual, I put it away for some months and then had to ‘re-learn’ what I did. Don’t do this! 🙂 I kept getting a new (at least for me!) error: “RuntimeError: working outside of application context”.

In order to access the SQLite DB I run the Python terminal and perform some imports, for example:

from <myApp> import db
from <myApp.models> import User, Reading

The above used to work for me many months ago. When I tried last week I got the below error when running User.query.all():

RuntimeError: working outside of application context

After brewing a good coffee from Nicaragua using my Moka pot I was able to figure this one out. I discovered application contexts in Flask. The way to run the imports in the Python terminal changed…at least for me, since I did not keep up to date on Flask stuff. Below is my solution:

from mbp import create_app
app = create_app()
app.app_context().push()
from mbp.models import User, Reading

After running the above I was able to run a query:

User.query.all()

and was able to get all users from my DB:

[User('alcatraz', 'alca@traz.com', 'c815f485bc056281.jpg'), User('peperabane', 'pepe@rabane.com', 'default.jpg'), User('papa', 'papa@mama.com', '97a063dcd433adb7.jpg'), User('itpro', 'hello@itprohelper.com', 'default.jpg'), User('toby', 'rgmilanes@gmail.com', 'default.jpg'), User('alan', 'brito@yahoo.com', 'default.jpg'), User('paco', 'paco@yahoo.com', 'default.jpg')]
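The same thing can also be written with the context manager form instead of push() (a sketch against my own mbp package and create_app factory, so adjust the imports for your app):

```python
from mbp import create_app
from mbp.models import User, Reading

app = create_app()
with app.app_context():
    users = User.query.all()
```

The with block pops the context automatically when it ends, which is tidier in scripts than a dangling push().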

Hope this helps someone. You can always contact me if you have questions or would like to collaborate. Also, remember to check out my T-shirt and coffee mug shop. I make my own designs.

UPDATE: You should also refer to the latest Flask and SQLAlchemy documentation as of this writing. The approach above is what worked for me using version 2.x.

Invalid Reference Format Error in Docker

The other day I was enjoying the sunset in Williamsburg while wearing my irregular bucket hat. I got a text from a friend who’s working on a web app project. He kept texting me the error he was getting in his Linux terminal, “invalid reference format”, while trying to build an image from a Dockerfile.

I asked him to send me more details. He was running:

docker build -t a-Beautiful-image .

I also tried to run it this way with docker run:

docker run -p 80:8080 a-Beautiful-image

And got the following more informative error:

docker: invalid reference format: repository name must be lowercase.

Uh oh! Now it makes sense. You need to change the image name to all lowercase:

docker build -t a-beautiful-image .

All works now!
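If you generate image tags from scripts or CI variables, it’s cheap to guard against this up front. A tiny sketch (the function name is mine, and it only handles the lowercase rule; the full image reference grammar is stricter):

```python
def normalize_image_name(name: str) -> str:
    """Lowercase an image name; Docker requires repository names to be lowercase."""
    return name.lower()

print(normalize_image_name("a-Beautiful-image"))  # a-beautiful-image
```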

Contact me if you have any questions. Also, take a look at my shop and let me know what you think.

Remove Network Credentials Windows 10

One of my clients texted me about having issues authenticating to a network share with his credentials. In this case somebody else was able to connect to the same network share, but he needed to connect using his own credentials.

The users found out that rebooting Windows resolves this. But who wants to reboot all the time?!

After putting on my cool irregular bucket hat and brewing a good cup of coffee from Ecuador I was able to find a better way.

Removing the Network Credentials using the Command Prompt

After I found this solution I remembered I had done this years ago in Windows. This is what happens when you don’t document your solutions.

So, to remove the ‘cached’ network credentials, open your Command Prompt and run the below to see which network shares are mounted and authenticated:

net use

This will give you a list such as:

Status  Local  Remote  Network
OK             \\somesharename\somefolder

Now, to remove the above ‘cached’ network share run:

net use \\somesharename\somefolder /del

You should get a message:

\\somesharename\somefolder was deleted successfully.

(If you want to clear all cached connections at once, net use * /delete also works.)

Remember, you can always contact me if you have any questions. Also, remember to check out my designs at my IT Handyman shop. (I make my own designs.)

Commenting within Jinja2 HTML Template

This one took me a while to find (in my case). I assumed it was not possible and was too lazy to look around. But after putting on my irregular bucket hat I decided to go for it.

I found it! And it was very simple. If you need a comment within a Jinja2 template you need to use {# #} tags. For example:

{# {{ Hello Hello soyhat.com }} #}

I had been trying the below and it never worked for me:

<!-- {{ Hello hello }} -->
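The difference matters beyond syntax: a Jinja comment is stripped on the server and never reaches the rendered HTML, while an HTML comment is still sent to the browser, and any template expressions inside it still get evaluated. A quick side-by-side:

```
{# This never reaches the rendered HTML #}
<!-- This DOES reach the browser, and {{ some_variable }} inside it still gets evaluated -->
```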

I still have a long way to go before I can feel comfortable building my web app using Flask. I will not give up. Contact me if you have any questions. Remember to check out my designs below.

Install Docker Red Hat 8 Workaround

After taking a walk in the park wearing my new irregular bucket hat I got a call from a customer wanting to install Docker on Red Hat 8. I have installed Docker on Ubuntu and macOS, but never on Red Hat. I found something right away on Docker’s documentation site:

We currently only provide packages for RHEL on s390x (IBM Z). Other architectures are not yet supported for RHEL, but you may be able to install the CentOS packages on RHEL. Refer to the Install Docker Engine on CentOS page for details.

I said shit! But after reading the last sentence I got some hope back. I really don’t understand why Docker doesn’t support RHEL on the x86_64 architecture.

Then I decided to follow the CentOS installation way by setting up the repository:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

I ran yum to install the latest Docker version:

sudo yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

You might be prompted to accept the GPG key. If so, go ahead and accept it.

Now Docker is installed, but NOT running. You need to start it:

sudo systemctl start docker

Next, verify Docker engine installation is running:

sudo docker run hello-world

The above command downloads a test image and runs it in a container. You should see something like this:

(screenshot: output of docker run hello-world, beginning with “Hello from Docker!”)

This means Docker has been successfully installed on Red Hat 8. Great that instructions for CentOS worked!

Now, if you’d like to run Docker without sudo, follow the below steps:

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker   # forces the new group membership to take effect

If you don’t run the newgrp command, you will need to log off and log back in for the changes to take effect.

Verify again without using sudo:

docker run hello-world

The above command should do the same as before. Contact me if you get errors.

If you’d like to configure Docker to start on boot with systemd, do the following:

sudo systemctl enable docker.service
sudo systemctl enable containerd.service

You should see the output:

Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /usr/lib/systemd/system/containerd.service.

If you’d like to undo the above, do the opposite:

sudo systemctl disable docker.service
sudo systemctl disable containerd.service

If you need to change logging drivers read this documentation.

Remember to check my shop. I design cool T-Shirts and coffee mugs. Thanks!