Wednesday, May 30, 2007

Linux getIP script (external IP from behind a NAT, that is)

I already knew that in Linux it all comes down to resourcefulness.. Once again it's been proven: you can do anything you want with the proper combination of basic commands!

I was tired of depending on dyndns, especially since I have my own URL. Problem was, how to get my external IP address. The first commands I thought of were traceroute / ifconfig / ping / .. but I soon realized those wouldn't work. A lot of data, but just not what I needed :) So I googled it & it came up with a 'lynx -dump "http://checkip.dyndns.org"' style command. That sounded a bit bloated to me, but I gave it a try; it didn't turn up much. Another person used "html2text" & .. I didn't even try that one. Those just aren't the standard commands.. Well, they are, but they do A LOT MORE!

So what's the solution? "wget"! :) So far so good.. "wget url".. but that's still relying on dyndns, and I didn't want that. Now, I use a standard D-Link router. This router has a device info page which contains the external address, and that is accurate.

So we've got the data, but A LOT of data. Too much :)
& we've got grep for that. So let's craft ourselves a grep command!

The external IP is on a line of its own. 3 lines above it, there's the word "Address", so we'll do a "grep Address FILE -A 5" (-A 5 to get the 5 trailing lines).

Now we have 4 of those occurrences. We only need the IP addresses, so we'll pipe that output to "grep -E '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'" (quote the pattern so the shell leaves the backslashes alone).

This gives us 2 addresses: an internal one for the router itself (yeah, it's a bitch) and the valuable external one! Now let's separate these :) We don't know anything about the latter, but we obviously know the former.. So another quick grep here.. "grep -v 192.168.0." so we don't see any internal addresses.
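To make that concrete, here's roughly what each stage returns on a hypothetical page dump (the real D-Link HTML looks different & 203.0.113.42 is just a made-up stand-in for the external address):

grep Address st_device.html -A 5
# -> a few blocks of HTML noise containing 192.168.0.1 and 203.0.113.42 somewhere

grep Address st_device.html -A 5 | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
# -> 192.168.0.1
# -> 203.0.113.42

grep Address st_device.html -A 5 | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | grep -v 192.168.0.
# -> 203.0.113.42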

All this put together gives us the following:

wget -q --user username --password password http://192.168.0.1/st_device.html
grep Address st_device.html -A 5 | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | grep -v 192.168.0.
rm -f st_device.html*

This outputs the external IP to the command line, ready to upload with ftp or do whatever the fuck you want with it :)
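If you'd rather have it in a shell variable for further scripting, the same pipeline can be wrapped in command substitution (the variable name is just an example):

extip=$( grep Address st_device.html -A 5 | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | grep -v 192.168.0. )
echo "external IP: $extip"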

I'll probably "grep ... > file" and "ncftpput -u User -p Password -r2 -t600 -DD -m Server URL SourceFile" & maybe even throw in a "diff" to determine if the IP address differs from the previous one. We might keep that one in a local file or download it from the place we're uploading to.
And no worries about uploading a wrong one, since we're uploading over the same router we're asking the info from. If it can't provide the info, we can't upload either. :)

Now throw this in a cron & there's no more need for dyndns :) Provided the space we're uploading to stays up, of course..
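For the cron part, something along these lines in your crontab (crontab -e) would do; the ten-minute interval & the script path are just examples, pick whatever suits you:

# check & upload the external IP every 10 minutes, silencing the output
*/10 * * * * /home/foo/getip.sh >/dev/null 2>&1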

You might also want to throw in some checks: for example, if the file wasn't downloaded or is empty, don't bother running the rest. Don't squander the processing time (if only on principle) & most of all, don't risk uploading an empty file over the IP address already in the file :) (if, for example, the router reboots during the wget but is up & running again by the time the ftp kicks in).
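A minimal sketch of such a check, assuming the router page was saved to /tmp/IpFile.router like the script further down does; -s tests that the file exists AND has content:

if [[ ! -s /tmp/IpFile.router ]]
then
    echo "router page missing or empty, not touching the uploaded IP" >&2
    exit 1
fi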

Something to consider here, though, is security. We're doing rm -f, downloading pages, executing crap, .. putting this in a cron, uploading over cleartext, whatnot. This, however, is beyond the scope of my blog post :) Go read a book or whatever!


Interesting process to observe while solving problems like this.. it's quite simple actually:
1. Ask yourself.. what information do you want?
2. Locate that information, independent of where it is or how to get to it. That's for later.
3. Determine how the information is served (it sounds dumb, but this is a crucial step!)
4. Now locate a Linux command-line client for that protocol. There are more than you can imagine: pdf2txt, wget, ftp, .......
5. Script it together. Once you've got the data, it's simple to reformat it. Only a matter of time :)

This DOES obviously sound cheap.. but did YOU immediately think about the smegging wget to the D-Link? I know I didn't, because I immediately assumed that information was not easy to come by. I thought of telnet client stuff or whatever.. but.. forgot that it's simply there in the web interface!
Follow the steps & carefully consider before writing off a source in the second step. :)


Edit.. for those too lazy to write the necessary code.. all you have to do here is edit the variables!

#!/bin/bash

# Variables
## Location & filename for temporary files
map="/tmp/"
file="IpFile"

## Remote Webserver settings
server=foo.bar
serverU=foo
serverP=bar
serverMAP=/public_html/foo/bar/

## Webpage containing the IP info
IPpage=http://acme/ip.html
IPu=foo
IPp=bar

## IP base to ignore - to filter out local IPs
IPbase=10.0.0.


## Do not edit beyond this point ;)

if [[ $map == "" || $map == "/" || $file == "" ]]
then
    echo "Thank god I didn't trust you to fill these in wisely."
    echo " (don't get overconfident though, this is an EXTREMELY basic test!!)"
    echo "At least one of the following errors was found!"
    echo
    echo "* the map and file variables can NOT be empty!"
    echo "* the map can NOT be / (the root)."
    echo "Since we're doing rm -f, these settings might cost you your OS!"
    echo " ;)"
    echo
    echo "$map$file"
    exit 1
fi


# make sure there are no leftover files that could screw up the process ;)
rm -f "$map$file"*

# Get the IP page from the router
wget -q --user "$IPu" --password "$IPp" -O "$map$file.router" "$IPpage"

# check that the downloaded page exists & isn't empty
if [[ -s "$map$file.router" ]]
then
    # extract the external IP, filtering out the internal range
    grep Address "$map$file.router" -A 5 | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | grep -v "$IPbase" > "$map$file"

    # Get current remote file
    ncftpget -cV -u "$serverU" -p "$serverP" "$server" "$serverMAP$file" > "$map$file.O"

    # only upload if the IP actually changed (or there was nothing up there yet)
    if [[ ! -z $( diff -q "$map$file" "$map$file.O" ) ]]
    then
        # Upload file
        ncftpput -V -u "$serverU" -p "$serverP" -r2 -t600 -DD -m "$server" "$serverMAP" "$map$file"
    fi

fi

# remove residual files
rm -f "$map$file"*


As for WHY I took the time to do this? Because I needed this script myself :)

I just realised.. ncftp is a great ftp program but not really standard. You can replace it with vanilla ftp, though you'll have to figure out those commands on your own (a rough sketch follows below)..
The rest is pure vanilla Linux software (grep & wget).
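For what it's worth, a rough, untested sketch with the stock ftp client would look something like this (reusing the variables from the script above; exact behaviour differs a bit per ftp build):

# non-interactive upload with plain ftp; -n skips auto-login so we can pass credentials ourselves
ftp -n "$server" <<EOF
user $serverU $serverP
cd $serverMAP
put $map$file $file
bye
EOF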
