Tanbir Ahmed Official

January 16, 2019
by Tanbir A.

Can you use a GPU as a CPU? Why/why not? If not, what’s the difference between them that makes it impossible?

While a GPU can do all the operations a CPU can (which means that you could, in theory, use it like a CPU), the architecture isn’t optimized for it, which would make it very inefficient.

While CPUs and GPUs are basically the same thing (processors), they have different goals: the CPU is optimized for latency, and the GPU is optimized for throughput. (The goal of the CPU is to complete any sequence of operations in the smallest possible amount of time, while the goal of the GPU is to do the maximum amount of work per unit of time.)

To do this, they use different architectures/layouts: the CPU has a few very big, fast cores, and the GPU has hundreds or thousands of tiny, slow “cores”.

So to make an analogy:

  • The CPU is a supercar: two seats, 200mph top speed.

  • The GPU is an articulated bus: 400 seats, 30mph top speed.

If you want to do one (or two) things really fast, the CPU wins. If you want to do the same thing over and over again a billion times (and don’t care how long it takes to do it just once), the GPU wins.

So what does this look like on the chip?

To really understand this, you need to know that in a CPU, the circuit that does the actual computation (let’s call it the ALU) is incredibly fast and the most important thing for CPU speed isn’t to make it faster, but to keep it fed with work to do and data to work on.

For this reason, CPUs have a ton of extra circuitry whose job is to keep the ALU busy (caches, predictors, schedulers, buffers, …). GPUs don’t do that as much: GPUs are designed to process pixels or triangles, and there are millions of them on a screen.

The repetitive nature of the work done on a GPU means that most cores will work on the same kind of thing at the same time, so the circuitry that feeds them with instructions and data can be shared across cores. And since you don’t care how long it takes for a single pixel to be computed, but rather how long it takes for the whole screen, each GPU core can afford to compute several pixels in parallel to amortize wait times (if the computation for a pixel has to wait for data from memory, the core can just switch to some other pixel).

The resulting architecture is very different: instead of having big cores, each with its own ALU and a huge control circuit to keep that ALU happy, the GPU has groups of cores that share the same control circuit. This means they can have a lot more ALUs (because they don’t need as much control circuitry), but the cores aren’t all independent. Cores within a group have to work on the same thing, which is fine when doing graphics but can lead to atrocious performance when trying to do one single thing.
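The latency-versus-throughput trade-off can be sketched with a toy shell experiment — background jobs stand in for GPU “cores” here; no real GPU is involved, and `sleep` stands in for the per-pixel work:

```shell
#!/bin/bash
# Toy illustration: eight slow "pixel" jobs, each taking 0.2s.

# CPU-style (one fast worker, jobs run back to back):
# total time ~ 8 x 0.2s = 1.6s, but each job starts as soon as the previous ends.
for i in $(seq 1 8); do
    sleep 0.2
done

# GPU-style (launch every job at once, then wait for all of them):
# total time ~ 0.2s, but no single job finishes any sooner than before.
for i in $(seq 1 8); do
    sleep 0.2 &
done
wait
```

The parallel loop finishes roughly eight times sooner in total, even though each individual job takes exactly as long — the throughput-versus-latency distinction in miniature.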

January 4, 2019
by Tanbir A.

Easy Auto test download and I/O speed script for Linux Server

I found this test script while searching online for a good VPS provider. One of the users shared his test results and linked to this script. The script is fully automatic: all you have to do is log in to your VPS or server via SSH and run one command. That’s it. Thanks to Teddysun.

The test script bench.sh covers network (downstream) and I/O testing on just about any Linux distribution, and the results are displayed in a clean, readable way.

Features of bench.sh:

  1. Displays various system information about the machine under test.
  2. Runs comprehensive download speed tests against test points in well-known data centers around the world.
  3. Supports IPv6 download speed measurement.
  4. Runs the I/O test three times and displays the average.

Usage Instructions:

Run either of the following (they do the same thing):

wget -qO- bench.sh | bash

or

curl -Lso- bench.sh | bash


bench.sh is both the script name and a domain name, so don’t get confused.

Github: https://github.com/teddysun/across/blob/master/bench.sh

Below is the output from my VPS:

June 17, 2013
by Tanbir A.

HOW TO RSYNC via SSH (Backup)

A very easy copy-and-paste tutorial on how to back up your very important files from your VPS to another VPS or Linux server.

This shows how to make a secure connection between servers via SSH, so a password is no longer required, and only these two servers can talk to each other.

Make note of your servers (this tutorial shows how to back up one VPS to another VPS): the main server you wish to back up, and the backup server.



Follow the commands in sequence to start making backups, substituting the appropriate user and IP address for the placeholders.

Main> ssh-keygen -t rsa -f .ssh/id_rsa

-t specifies the key type
-f tells ssh-keygen where to store the public/private key pair. In this case, the .ssh directory in your home folder is used

You will be asked for a passphrase; leave it blank by just pressing <enter>.
Now, go to the .ssh directory and you will find two new files: id_rsa and id_rsa.pub. The latter is the public part. Now, copy the public key to the backup machine:

Main> cd .ssh
Main> scp id_rsa.pub user@Backup:~/.ssh/id_rsa.pub

Of course, this time you will need to enter the password.
Now, log in to the backup machine and go to its .ssh directory:

Main> ssh user@Backup
Backup> cd .ssh

Now, add the client’s public key to the known public keys on the server:

Backup> cat id_rsa.pub >> authorized_keys
Backup> chmod 644 authorized_keys
Backup> rm id_rsa.pub
Backup> exit

Some useful Backup commands:

  1. rsync -ravz -e "ssh" user@Main:/home /root/backup/$(date +"%d-%m-%Y")/
  2. rsync -ravz -e "ssh" user@Main:/etc /root/backup/$(date +"%d-%m-%Y")/
  3. rsync -ravz -e "ssh" user@Main:/var /root/backup/$(date +"%d-%m-%Y")/
  4. rsync -ravz -e "ssh" user@Main:/var/lib/mysql /root/backup/database/$(date +"%l%p-%d-%m-%Y")/
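The $(date +"%d-%m-%Y") substitution in these paths expands to a dated folder name, so each run lands in its own directory:

```shell
# Prints today's date as day-month-year (e.g. 17-06-2013),
# which rsync then uses as the destination directory name.
date +"%d-%m-%Y"
```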

To make this process completely automated, create cron jobs for these that run every hour, day, week, or month, depending on your backup needs.
You can also add another cron job to delete backup files older than 30 days, to keep backup space free for daily backups!

find /path/to/files/* -ctime +30 -exec rm -rf {} \+
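As a sketch, that automated schedule could look like this in root’s crontab (crontab -e). Note that % is a special character in crontab entries and must be escaped as \%; the paths and user@Main are the same placeholders used above:

```shell
# m h dom mon dow  command
# Daily backup of /home at 02:00 (every % in the date format escaped as \% for cron)
0 2 * * * rsync -ravz -e "ssh" user@Main:/home /root/backup/$(date +\%d-\%m-\%Y)/
# Weekly cleanup on Sundays at 03:00: delete backups older than 30 days
0 3 * * 0 find /root/backup/* -ctime +30 -exec rm -rf {} +
```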


Full Credit to http://codedpenguin.com

June 17, 2013
by Tanbir A.

SolusVM Exploit, time to look at Proxmox a bit more

Early this morning, a huge number of VPS providers were attacked with a zero-day SolusVM exploit. It was bad, I mean really bad.


So I guess now it’s time to look more at Proxmox, an open-source virtualization management solution for servers. It is based on KVM and container virtualization and manages virtual machines, storage, virtualized networks, and HA clustering.

That means it’s FREE.

Try it yourself today: http://proxmox.com/proxmox-ve

June 15, 2013
by Tanbir A.

What if Superman Punched You?

So what would happen if Superman punched you?

The answer is: Superman wouldn’t just knock the wind out of you — oh, no… He would knock the atoms out of you.

For more details on this subject view this video below.

June 12, 2013
by Tanbir A.

RamNode has a 38% off coupon for this week only

One of my favorite SSD VPS providers, RamNode, is celebrating their one-year anniversary with a 38%-off-for-life coupon, available this week only.


This coupon will expire in a week, so grab your RamNode VPS fast!

RamNode has been ranked in the top 4 on lowendbox.com for the last three quarters.

Their VPSes are hosted in Atlanta and Seattle. Servers are located at 55 Marietta (Atlanta) and The Westin (Seattle), and they own all of their hardware and network (AS3842).

RamNode offers KVM and OpenVZ VPSs.

VPS Features

Each VPS comes with the following features:

  • SolusVM control panel
  • 1Gbps fair share port speed
  • INSTANT setup
  • Weekly remote backups

For more details on the offer visit: http://lowendtalk.com/discussion/11055/ramnode-one-year-celebration-38-off-limited-time-your-favorite-ssd-vps-provider

June 6, 2013
by Tanbir A.

Automatically Reboot server when it runs out of memory


In cases where something goes awry, it is good to automatically reboot your server when it runs out of memory. This causes a minute or two of downtime, but that’s better than languishing in a swapping state for potentially hours or days.

You can leverage a couple of kernel settings and Lassie to make this happen on Linode.

Adding the following two lines to your /etc/sysctl.conf will cause the server to reboot after running out of memory:

vm.panic_on_oom = 1
kernel.panic = 10

The vm.panic_on_oom = 1 line enables panic on OOM; the kernel.panic = 10 line tells the kernel to reboot ten seconds after panicking. Run sudo sysctl -p to apply the settings without waiting for a reboot.

Read more about rebooting when out of memory on Linode’s wiki.

June 6, 2013
by Tanbir A.

Linux Advanced Security Setup

Prevent repeated login attempts with Fail2Ban

Fail2Ban is a security tool to prevent dictionary attacks. It works by monitoring important services (like SSH) and blocking IP addresses which appear to be malicious (i.e. they are failing too many login attempts because they are guessing passwords).

Install Fail2Ban:

sudo aptitude install fail2ban

Configure Fail2Ban:

sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo nano /etc/fail2ban/jail.local

Set “enabled” to “true” in the [ssh-ddos] section. Also, set “port” to “44444” in the [ssh] and [ssh-ddos] sections. (Change the port number to match whatever you used as your SSH port).

Save the file and restart Fail2Ban to put the new rules into effect:

sudo service fail2ban restart

Add a firewall

We’ll add an iptables firewall to the server that blocks all incoming and outgoing connections except for ones that we manually approve. This way, only the services we choose can communicate with the internet.

The firewall has no rules yet. Check it out:

sudo iptables -L

Setup firewall rules in a new file:

sudo nano /etc/iptables.firewall.rules

The following firewall rules will allow HTTP (80), HTTPS (443), SSH (44444), ping, and some other ports for testing. All other ports will be blocked.

Paste the following into /etc/iptables.firewall.rules:


*filter

#  Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

#  Accept all established inbound connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

#  Allow all outbound traffic - you can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT

#  Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT

#  Allow ports for testing
-A INPUT -p tcp --dport 8080:8090 -j ACCEPT

#  Allow ports for MOSH (mobile shell)
-A INPUT -p udp --dport 60000:61000 -j ACCEPT

#  Allow SSH connections
#  The --dport number should be the same port number you set in sshd_config
-A INPUT -p tcp -m state --state NEW --dport 44444 -j ACCEPT

#  Allow ping
-A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

#  Log iptables denied calls
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

#  Reject all other inbound - default deny unless explicitly allowed policy
-A INPUT -j REJECT
-A FORWARD -j REJECT

COMMIT


Activate the firewall rules now:

sudo iptables-restore < /etc/iptables.firewall.rules

Verify that the rules were installed correctly:

sudo iptables -L

Activate the firewall rules on startup:

sudo nano /etc/network/if-pre-up.d/firewall

Paste this into the /etc/network/if-pre-up.d/firewall file:

/sbin/iptables-restore < /etc/iptables.firewall.rules

Set the script permissions:

sudo chmod +x /etc/network/if-pre-up.d/firewall

Get an email anytime a user uses sudo

I like to get an email anytime someone uses sudo. This way, I have a “paper trail” of sorts, in case anything bad happens to my server. I use a Gmail filter to file these away and only look at them occasionally.

Create a new file for the sudo settings:

sudo nano /etc/sudoers.d/my_sudoers

Add this to the file:

Defaults    mail_always
Defaults    mailto="[email protected]"

Set permissions on the file:

sudo chmod 0440 /etc/sudoers.d/my_sudoers

This isn’t mentioned anywhere on the web, as far as I know, but in order for the “mail on sudo use” feature to work, you need to install an MTA. sendmail is a good choice:

sudo aptitude install sendmail

Now, you should get an email anytime someone uses sudo!

June 6, 2013
by Tanbir A.

Linux Basic Security Setup

Create a new user

The root user has a lot of power on your server. It has the power to read, write, and execute any file on the server. It’s not advisable to use root for day-to-day server tasks. For those tasks, use a user account with normal permissions.

Add a new user:

adduser <your username>

Add the user to the sudo group:

usermod -a -G sudo <your username>

This allows you to perform actions that require root privileges by simply prepending the word sudo to the command. You may need to type your password to confirm your intentions.

Log in with the new user:

ssh <your username>@<your server ip>

Set up SSH keys

SSH keys allow you to login to your server without a password. For this reason, you’ll want to set this up on your primary computer (definitely not a public or shared computer!). SSH keys are very convenient and don’t make your server any less secure.

If you’ve already generated SSH keys before (maybe for your GitHub account?), then you can skip the next step.

Generate SSH keys

Generate SSH keys with the following command:

(NOTE: Be sure to run this on your local computer — not your server!)

ssh-keygen -t rsa -C "<your email address>"

When prompted, just accept the default locations for the keyfiles. Also, you’ll want to choose a nice, strong password for your key. If you’re on Mac, you can save the password in your keychain so you won’t have to type it in repeatedly.

Now you should have two keyfiles, one public and one private, in the ~/.ssh folder.
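If you want to double-check the result, ssh-keygen can print the public key’s fingerprint (the path below assumes you accepted the default location):

```shell
# Show the key size, fingerprint, comment, and type of the public key
ssh-keygen -lf ~/.ssh/id_rsa.pub
```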

If you want more information about SSH keys, GitHub has a great guide.

Copy the public key to server

Now, copy your public key to the server. This tells the server that it should allow anyone with your private key to access the server. This is why we set a password on the private key earlier.

From your local machine, run:

scp ~/.ssh/id_rsa.pub <your username>@<your server ip>:

On your Linode, run:

mkdir .ssh
mv id_rsa.pub .ssh/authorized_keys
chown -R <your username>:<your username> .ssh
chmod 700 .ssh
chmod 600 .ssh/authorized_keys

Disable remote root login and change the SSH port

Since all Ubuntu servers have a root user and most servers run SSH on port 22 (the default), criminals often try to guess the root password using automated attacks that try many thousands of passwords in a very short time. This is a common attack that nearly all servers will face.

We can make things substantially more difficult for automated attackers by preventing the root user from logging in over SSH and changing our SSH port to something less obvious. This will prevent the vast majority of automatic attacks.

Disable remote root login and change SSH port:

sudo nano /etc/ssh/sshd_config

Set “Port” to “44444” and “PermitRootLogin” to “no”. Save the file and restart the SSH service:

sudo service ssh restart

In this example, we changed the port to 44444. So, now to connect to the server, we need to run:

ssh <your username>@future.<your domain>.net -p 44444

Update: Someone posted this useful note about choosing an SSH port on Hacker News:

Make sure your SSH port is below 1024 (but still not 22). The reason is that if your Linode is ever compromised, a bad user may be able to crash sshd and run their own rogue sshd as a non-root user, since ports above 1024 don’t require root to bind.