Heckroth Industries

Linux

Launching screen only if the current terminal isn't already in a screen instance

If you like to use screen, this is a useful piece of code to add to the end of your .bashrc file. It launches screen, but only if the current terminal isn’t already inside an instance of screen. The great thing about this is that it works across machines: put it in your .bashrc on all the machines you like, and when you ssh into the first one you get a new screen session, but ssh into a second machine from within that session and the second machine won’t launch another copy of screen.

command_exists () {
    type "$1" &> /dev/null
}

# If we are in an SSH session, not already running inside screen, and
# screen is installed, then think about starting it
if [ -n "$SSH_TTY" ] && [ -z "$WINDOW" ] && [ "$TERM" != "screen.linux" ] && command_exists screen; then
    # If a screen session is already attached, report it; otherwise run
    # screen and attach to the first detached session
    SCREENLIST=$(screen -ls | grep 'Attached')
    if [ -n "$SCREENLIST" ]; then
        echo -e "Screen is already running and attached:\n ${SCREENLIST}"
    else
        screen -U -R
    fi
fi
Jason — 2013-12-03

Linux or GNU/Linux

Yet again an old argument has reared its head, and so I have decided to voice my opinion. The discussion in question is “should it be called Linux or GNU/Linux?”

To me, the idea that I should call the operating system GNU/Linux is wrong. The operating system is Linux; the kernel is Linux. Yes, I run a lot of GNU software on Linux, but I also run a lot of GNU software on my Windows machines, and no-one tells me I should refer to that as GNU/Windows.

I also think that it is detrimental to GNU when people refer to Linux as GNU/Linux. It sets people up to think of GNU software only when they are looking for software on Linux, when in fact there is GNU software available for most operating systems.

So in conclusion, it’s going to be called Linux by me for the foreseeable future. (Though it looks like my CMS has decided to call it GNU/Linux via its tag ordering.)

Jason — 2012-05-11

comm: the opposite of diff

Today I needed to find the matching lines in a number of text files. My first thought was: what is the opposite of diff? The answer is comm. To compare two text files and output the lines that appear in both, use

comm -1 -2 <file 1> <file 2>

To get the matching lines between four files I redirected the output to temporary files and then comm’d them.

comm -1 -2 <file 1> <file 2> > tmp1
comm -1 -2 <file 3> <file 4> > tmp2
comm -1 -2 tmp1 tmp2

You can pipe into comm by using ‘-’ instead of a filename, so you could also compare four files with

comm -1 -2 <file 1> <file 2> | comm -1 -2 - <file 3> | comm -1 -2 - <file 4>
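One caveat: comm expects both of its inputs to be sorted, and silently misses matches if they aren’t. If the files aren’t already sorted, bash process substitution lets you sort them on the fly (the file names below are just examples):

```shell
#!/bin/bash
# comm only reports lines as common if its inputs are sorted, so sort
# them on the fly with process substitution (file names are examples)
printf 'b\na\nc\n' > /tmp/list1
printf 'c\nb\nd\n' > /tmp/list2
comm -1 -2 <(sort /tmp/list1) <(sort /tmp/list2)   # prints: b, then c
```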
Jason — 2011-06-21

Why I use cat even though there are more efficient methods

When using a series of commands tied together with pipes I usually start with the cat command. A lot of the time, when I post a one-liner solution on a forum, someone replies that there was no point in starting with cat as it is inefficient. So I decided to put up a quick post about why I use cat rather than one of the other methods.

The main reason I use cat at the start of most strings of pipes is that it is easier to maintain. The logical flow of the data goes from left to right, and the file that goes into the pipe is easy to spot, e.g.

cat /etc/passwd | grep bash | grep -v :x:

We can see here that /etc/passwd gets pushed through grep first to find the lines containing bash. Those lines are then pushed through grep again, looking for lines that don’t contain :x: (i.e. non-shadowed passwords). This could have been written in a number of different ways.

grep bash /etc/passwd | grep -v :x:
</etc/passwd grep bash | grep -v :x:

In these examples the first way would be reasonable, but the original file at the start of the pipe is a little hidden, tucked away in the first grep command. The second way puts the original file at the start and is very clear, but a typo of > instead of < will destroy the file I actually want to read from.

So yes, there are more efficient ways to start off a string of pipes, but I like to use cat as it makes things a bit more obvious than some, and is less prone to destroying data through a simple typo than others.

Jason — 2011-01-10

Making my SoSmart work the way I want it to

Last year I purchased a Dane-Elec SoSmart media player. It has a 1TB disc in it and numerous outputs, including HDMI. As a media player I can’t really fault it. The only problem I had was that it uses the NDAS protocol to share the disc; it doesn’t use SMB or any other sane protocol.

For those that don’t know what NDAS is (and I didn’t until I got this device): rather than using a protocol to share or transfer files, it uses the NDAS protocol to share the physical disc. There are drivers for Linux as well as Windows, but the speeds it achieved were terrible. Was there a solution?

Kind of. Sir Dekonass had produced a custom firmware build that lets you access the device using both telnet and ftp. Not exactly secure, but on my little home network not much of a problem. Great, except I didn’t have write access to the disc, only read access. I telnetted in; root didn’t have a password, and I couldn’t set one either.

I then tried to unmount the drive and remount it, but it came back as read-only again. I assumed this was due to it being NTFS and the driver being used. So I connected to the device over USB and found that I could use fdisk to recreate the partition on the disk as a Linux partition, then use mkfs.ext3 to create a new file system on it. On restarting, I discovered that it had mounted the partition read-only again, even though it was now ext3. I tried unmounting and remounting it, and this time I could write to the disc. Great, except that I had to log in and remount the disk each time I turned it on, which I really didn’t want to do.

It took a while of digging through the start-up scripts on the SoSmart, but I discovered that during the startup sequence it looks on each disc it has mounted for a file called mvix_init_script and, if it exists, executes it. I knew I couldn’t easily put the script on the disc in the machine, as I wanted to unmount that. To get around this problem I took an old USB flash drive, repartitioned it into two new partitions and formatted them both as ext3.

On the first partition I created my custom mvix_init_script. On the second I copied the contents of the /etc directory. Inside my mvix_init_script I effectively have three sections.

The first unmounts the second partition on the USB drive and remounts it over /etc. This lets me add passwords and new users to the setup, so the passwordless root account now exists for only a very short period during startup.

The second section unmounts the internal hard disk and remounts it, making sure it is writeable. This lets me use ftp to transfer files over to the disk, which is a lot faster and easier than maintaining the NDAS setup.

The third section kills the shttpd processes and restarts them with a document root of just the internal hard disk. This stops people viewing things like /etc/passwd through it, which really wouldn’t be a good thing.
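Pulling those three sections together, the script might look roughly like the sketch below. To be clear, this is a hypothetical reconstruction: the device names, mount points and the shttpd flag are assumptions, not taken from the SoSmart itself.

```shell
#!/bin/sh
# Hypothetical sketch of the mvix_init_script described above; device
# names, mount points and the shttpd flag are assumptions.

# 1. Remount the second USB partition over /etc so the custom
#    passwords and users take effect shortly after boot.
mount -t ext3 /dev/sdb2 /etc

# 2. Remount the internal disk read-write so ftp uploads work.
umount /dev/sda1
mount -t ext3 -o rw /dev/sda1 /mnt/hdd

# 3. Restart shttpd with its document root limited to the media disk,
#    so files such as /etc/passwd can't be served.
killall shttpd
shttpd -root /mnt/hdd &
```

This is an init-script fragment for the device itself, so it isn’t something to run on an ordinary machine.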

Is this a perfect setup? Not really; I would like to have SSH on it rather than telnet, but then copying files over SSH would take longer than the simple ftp method.

It turns out, after googling “mvix_init_script”, that the system is actually the Mvix firmware built for the SoSmart, and a lot of good information is available at http://mvixcommunity.com/ .

Jason — 2010-12-06

Quick way to use tcpdump to grab packets

Every now and again I need to grab the packets going to and from a specific port on a server. The client side isn’t a problem, as I have Wireshark installed on my workstation, but when dealing with server-to-server communications I prefer to use the command line. Most servers I deal with don’t have the X Window System installed, as there is no need to waste resources on it.

Of course, almost all Unix-style operating systems have a program called tcpdump, which can be used to collect the packets we are interested in. Being a very powerful tool, it has a rather long man page and can be hard to get started with.

Usually I want to grab the packets going to or from a specific port, e.g. port 25 for SMTP or 80 for HTTP. Here are two examples that show how easy it is to use tcpdump.

tcpdump -w /tmp/smtp.pcap -s 1500 -i lo 'tcp port 25'

This example will grab all TCP packets going to or from port 25 on the loopback interface (127.0.0.1) and put them in the /tmp/smtp.pcap file. This file is in the pcap format, so you can copy it to your workstation and dig into it with Wireshark. The -s parameter specifies how much of each packet to grab; usually 1500 is enough to capture the whole packet, but you may need to increase it if you find packets getting truncated.

tcpdump -w /tmp/http.pcap -s 1500 -i eth0 'tcp port 80'

This example will grab all TCP packets going to or from port 80 on the eth0 interface and put them in the /tmp/http.pcap file. Again we want whole packets, so we use a size of 1500.
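If copying the file over to a workstation isn’t convenient, the capture can also be read back on the server itself with tcpdump’s -r flag (the file name here matches the example above):

```shell
# Read a previously captured pcap file back in human-readable form;
# -n skips reverse DNS lookups so the output appears without delay
tcpdump -n -r /tmp/http.pcap
```

This needs tcpdump and an existing capture file, so it only makes sense on the machine where the capture was taken.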

Jason — 2010-11-17

Changing the hostname on Red Hat Enterprise Linux

So I had to change the hostname on a server running Red Hat Enterprise Linux (v5). As I have to do this quite often, I figured I would note down where it needs to be changed.

  • Update the HOSTNAME entry in /etc/sysconfig/network

  • Update the entry in /etc/hosts to have the correct new hostnames, both with and without the domain.

  • Update the list of hostnames that sendmail will accept emails for in /etc/mail/local-host-names.

  • Run hostname <NEW_HOSTNAME>. This updates the hostname there and then, and avoids the need to restart the server to pick up the new hostname.

  • Finally, if your server has DNS entries pointing to it update these to include the new hostname.
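The /etc/sysconfig/network edit lends itself to a one-line sed. Here is a sketch of that step, demonstrated on a scratch copy so the real file is untouched (the hostnames are placeholders):

```shell
#!/bin/bash
# Rewrite the HOSTNAME line the way you would in /etc/sysconfig/network,
# shown here on a scratch copy; the hostnames are placeholders
scratch=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=oldhost.example.com\n' > "$scratch"
sed -i 's/^HOSTNAME=.*/HOSTNAME=newhost.example.com/' "$scratch"
grep '^HOSTNAME' "$scratch"   # HOSTNAME=newhost.example.com
```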

Jason — 2010-11-01

Crontab oddity

I have just managed to figure out an oddity with crontab on a newish Red Hat Enterprise Linux machine. Whenever I typed crontab -e to edit the crontab, it would open an empty file rather than the existing crontab. It turns out I needed to add an EDITOR environment variable in my bash profile.

export EDITOR=/usr/bin/vim

Once the EDITOR environment variable was there, everything worked. The annoying thing is that you don’t know there is a problem until the second time you edit your crontab, at which point you instinctively close the editor and crontab writes the empty file over your existing crontab, deleting the one you already had.
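Given that failure mode, it is worth keeping a backup before editing: crontab -l prints the current crontab to stdout, so a copy is one redirect away (the backup path is just an example):

```shell
# Keep a copy of the current crontab so a bad editor invocation
# can't silently wipe it (the backup path is just an example)
crontab -l > ~/crontab.backup
crontab -e
```

Restoring is then just crontab ~/crontab.backup.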

Jason — 2010-07-28

Always remember the block size option on dd

I am currently having to copy one 50GB partition over another 50GB partition. dd is great for this, but the first time I did it I forgot to set the block size, so dd defaulted to 512 bytes. An hour and ten minutes later, dd finished. The second time I remembered to set it to 20MB (just add bs=20M as one of the parameters to dd) and it took just under 9 minutes. Just to see what difference it makes, I ran it a third time with a block size of 40MB and it took just over 7 minutes.
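For reference, the partition copy itself looks something like dd if=/dev/<source> of=/dev/<destination> bs=20M, where the device names are placeholders. The same idea can be sketched safely on scratch files:

```shell
#!/bin/bash
# Copy with an explicit block size; a larger bs means fewer, larger
# read/write calls. Demonstrated on a 1MB scratch file rather than
# real partitions so nothing gets overwritten.
src=$(mktemp)
dst=$(mktemp)
head -c 1048576 /dev/zero > "$src"           # 1MB of test data
dd if="$src" of="$dst" bs=64K 2>/dev/null    # dd's status chatter goes to stderr
cmp -s "$src" "$dst" && echo "copy matches"
```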

In the future I really should try harder to remember to set the block size when using dd.

Jason — 2010-07-27

Red Hat Enterprise Linux YUM update glitch

Well, I have just figured out why some of the machines I use had stopped picking up updates. When I looked at the list of systems on the Red Hat Network they had a list of updates they hadn’t picked up, but when I logged into the machines and ran

yum update

it said there were no updates. After trying a lot of things on one of the systems that I was free to test with, I still couldn’t work out what was going on. Eventually I discovered that yum had corrupted its cache, so it thought its list of packages was up to date when it wasn’t. The solution was quite easy after that:

yum clean all

yum update

I know, I should have tried that at the start.

Jason — 2010-06-18