I’m going through a book called Violent Python, and the first program is building a password cracker. The example is great, but it uses the crypt module and DES, which is a bit out of date. The author suggests trying to adapt the program for SHA512, which is used on many modern *nix systems.
So, I came up with this rudimentary program to crack SHA512 hashed passwords with 5k rounds from my /etc/shadow file. There are, of course, MUCH better ways to crack passwords, but hopefully this will be helpful to someone else going through this book.
# -*- coding: utf-8 -*-
from passlib.hash import sha512_crypt
def testPass(cryptPass, saltySalt):
    # Try every word in the dictionary against the hash from the shadow file
    dictFile = open('dictionary.txt', 'r')
    for word in dictFile.readlines():
        word = word.strip('\n')
        cryptWord = sha512_crypt.using(salt=saltySalt, rounds=5000).hash(word)
        if cryptWord == cryptPass:
            print('[+] Found Password: ' + word + '\n')
            return
    print('[-] Password Not Found.\n')

def main():
    passFile = open('shadow.txt')
    for line in passFile.readlines():
        if '$' in line:
            user = line.split(':')[0]
            cryptPass = line.split(':')[1]
            # sha512_crypt strings look like $6$salt$hash - the salt is field 2
            saltySalt = line.split('$')[2]
            print('[*] Cracking Password For: ' + user)
            testPass(cryptPass, saltySalt)

if __name__ == '__main__':
    main()
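If the string slicing in there looks cryptic, here’s how a shadow-style line breaks apart. The entry below is completely fabricated – the hash portion is junk, so don’t bother cracking it:

```python
# A fabricated /etc/shadow style entry - the hash portion is nonsense
line = "victim:$6$ms32yIGN$notARealHashJustAnExample:17460:0:99999:7:::"

user = line.split(':')[0]        # username comes before the first colon
cryptPass = line.split(':')[1]   # the whole $6$salt$hash crypt string
saltySalt = line.split('$')[2]   # the salt lives between the 2nd and 3rd '$'

print(user)       # victim
print(saltySalt)  # ms32yIGN
print(cryptPass)  # $6$ms32yIGN$notARealHashJustAnExample
```

The `$6$` prefix is what tells you it’s sha512_crypt in the first place – if you see `$1$` (MD5) or `$5$` (SHA256) in your shadow file, this script isn’t going to find anything.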
Ok, so now that we’ve got this bad boy installed, a new password set, and licensing taken care of, it’s time to start configuring!
The first thing we need to do is to define an IP address for management access, so from within the vSphere console window we do:
Then, it will go a little like this:
Now, we will need to save and activate this before you start getting all web-browser giddy and ditch the CLI for all those happy GUI times ahead.
So, let’s do that:
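For reference, it boils down to two ACLI commands – save first, then activate (output trimmed, and the hostname is whatever you named yours):

```
3820_HOSTNAME# save-config
3820_HOSTNAME# activate-config
```

If you skip the activate, your running config and saved config won’t match, and the CLI will nag you about it until you sync up.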
Sweet! Now we can get all visual and stuff. Browse to your IP address, and log in with ‘admin’ and your newly created enable password. Click around a bit – enjoy it. Then RTFM, and join us in the next post where we start really configuring shit.
I’ve recently had a chance to work with some ACME Packet 3820s. A.K.A. Oracle blah blah blah something or other SBC. I dunno, really – every time a cool company gets bought out by a giant they rename everything. We’re working with the Acme 7.3 Enterprise code.
If you have a login for the Oracle support site, and appropriate entitlement, you too can download this software from the Oracle Software Delivery Cloud.
What we want looks something like this:
Ahh, yes, the “Oracle Enterprise Session Border Controller” – that’s what it’s called. I’m still calling it ACME, bitches.
Once you get your grubby little hands on this .zip archive, and unzip the nested shit, you can go into the nnECZ730_64 directory. What do we see here? Wait! What’s that? An .ova? Sweet!
It sure is. You can run this bad-boy on the nearest ESXi box, and have a great lab setup without buying yourself a 3820. I like that price-point a little better.
Deploying is easy – it’s an .ova – DUH! (I’m not stepping you through that part – if you don’t know how, google it.) The cool thing is that this baby is small:
You can run this thing pretty much anywhere – like on your laptop with VirtualBox. I have an ESXi host for my lab, so I’m running it there.
During the OVA deployment, you need to do a few things like accept the EULA, pick your datastore, and importantly – assign the ‘physical’ interfaces to the right network. Mine looks something like this:
Couple more clickity clicks, and a 2 minute wait, and you too can pop over to the console and log in with a password of ‘acme’. Then type ‘enable’ and use the password of ‘packet’.
Now, I shouldn’t have to point this out – but this is probably not the most secure set of credentials – let’s remedy that. Type ‘secret login password’ and replace that ‘acme’ with something new. Then do ‘secret enable password’ and replace that ‘packet’ with something new as well.
Ahhh, I feel better now, don’t you?
Did you notice the CLI complaining about ‘No Valid License Present!’? I sure did. Let’s remedy this:
setup product (you only have one choice here, so choose it and save)
setup entitlements (I chose 500 sessions since this is a lab system and I will never ever ever get close to that. It goes up to 256,000 – that’s a lot of motherfrackin’ sessions!)
Now – lets give this fucker an IP address to use for further administration, shall we? We shall. In the next post.
Did you think I was done busting caps after that last post? Really? You must not have known that I’m an O.G.
Sometimes it’s not enough to be busting caps on the IP encapsulated stuff. Sometimes you need to pull a PCM trace from the router or VG224.
Cisco makes it easy for you if you’re using an ISR G2 router – or a VG350, but if you have to do it on a VG224 – it’s not so straightforward. The commands to get the PCM.dat file are really no big deal. Capturing on port 2/0 looks like this:
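I’m going from memory here, so double-check the syntax against Cisco’s PCM tracing docs before you trust it on a production box, but the gist is: carve out a capture buffer and destination in global config, then kick off the dump from exec mode on the voice port:

```
voice pcm capture buffer 200000
voice pcm capture destination flash:
!
test voice port 2/0 pcm-dump caplog 7 duration 255
```

When the duration expires, grab the pcm.dat file off flash and feed it to your favorite audio analysis tool.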
Sometimes I bust so many caps I feel like I’m Waka Flocka:
Today I had to bust a cap on an Acme Packet 3820. We had been having a strange, intermittent one-way audio problem plaguing us for some time. All the SIP traces had looked good on the call examples, and we were unable to reproduce the issue. So, time went on, and once in a while we would hear about another one. Always from the same guy, and always a wireless caller calling in from the PSTN to him.
This is one of the most frustrating things in IT: intermittent problems with no real pattern that you cannot reproduce. Well, we had our breakthrough a couple of days ago. We had a few more people reporting problems, and found a pattern to it. It was happening only to the people who had mobility (Single Number Reach) enabled on their phones. We were able to reproduce it with the client, so I set up a test phone to mirror their configuration, and sure enough, I could reproduce it on my test phone, too. Jackpot!
So, I started to look at the traces. It was quite the call flow between 4 pairs of SBCs, a CUCM SME cluster, a Siemens OSV, and a CUCM Leaf cluster. SIP messaging was bouncing around like hollow points ricocheting.
We were able to simplify it a little bit – get it down to 2 SBC pairs, take the OSV out of the call flow, and modify the route group so at least we could choose the SBC pair for the outbound call to the remote destination.
After doing this, and making several test calls, it was still very hit or miss. One in 4 or so would fail, with no real pattern to it. All the signaling looked good when scrutinizing the SDP. So what next?
I did what any good little UC engineer would do, and started busting caps. First, I put the 6945 I was using for testing on to a switch I had hanging out with me, and did a little monitor-session config:
First – filter those nasty BPDUs from your uplink port so you don’t put the port on the switch in the closet into an errdisable state:
switchport mode access
power inline never
spanning-tree bpdufilter enable
Then, pick your nose, and the ports that you want to use:
no monitor session 1
monitor session 1 source interface FastEthernet 0/1 both
monitor session 1 destination interface Fa0/2
Now, it was time to shark my wire, and what did I see? RTP in both directions:
Digging into the details – those little UDP packets were headed for the right port on the SBC, so it was time to capture on the other side.
Here’s what I did to send a copy of all the packets in/out of the SBC to my laptop:
3820_HOSTNAME# configure terminal
3820_HOSTNAME(configure)# system
3820_HOSTNAME(system)# capture-receiver
3820_HOSTNAME(capture-receiver)# state enabled
3820_HOSTNAME(capture-receiver)# address 10.98.1.198
3820_HOSTNAME(capture-receiver)# network-interface s1p0:0
3820_HOSTNAME(capture-receiver)# exit
Save Changes [y/n]?: y
last-modified-date 2015-04-25 18:57:06
3820_HOSTNAME# save-config
Save-Config received, processing.
waiting for request to finish
Request to ‘SAVE-CONFIG’ has Finished,
Currently active and saved configurations do not match!
To sync & activate, run ‘activate-config’ or ‘reboot activate’.
3820_HOSTNAME# activate-config
Activate-Config received, processing.
waiting for request to finish
Request to ‘ACTIVATE-CONFIG’ has Finished,
3820_HOSTNAME# packet-trace remote start s1p0 10.98.61.45
Trace started for 10.98.61.45
3820_HOSTNAME# packet-trace remote start s0p0 192.168.205.12
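One more thing: when you’re done, turn the traces off, or the SBC will keep encapsulating and shipping every matching packet over to your capture box. Hedging slightly on the exact syntax, stopping everything at once looks like:

```
3820_HOSTNAME# packet-trace remote stop all
```

You can also stop a single trace by repeating the interface and address from the corresponding start command.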
So I was recently tasked with building out a UAT (user acceptance testing) environment for a customer. This is basically an exact replica of the production environment we have just spent 2 years designing and implementing. This allows the customer to test any upgrades, changes, or patches before actually implementing them into production. Plus it’s pretty cool if you are doing some dicey shit and wanna make sure you don’t bring the entire enterprise to its knees in the middle of the day.
Now to the problem. 30% or so of the UAT environment was already built out using an SU 1 version of call manager. Now you may say “who fucking cares”? Well guess what, I do. Cisco doesn’t publish a bootable version of the SU 1 version of call manager. Now when you are adding new subscribers to an existing publisher, the versions of call manager have to be exactly the same, or the new subscribers will shit all over themselves. So we came up with the brilliant idea to switch versions on all the UAT servers that were currently up and running. (I have a bootable ISO for this version). Once I got all the existing servers back to the old version, I could install all the new servers, do a switch version at the CLI of the existing servers, and then do a simple upgrade on all the new servers to the SU 1 version of Call Manager. Sounds easy, right? Well it is pretty fucking easy, the only problem is it doesn’t fucking work. I got the entire environment built out, ran the switch version commands, then set off the upgrades. Came back an hour later to find all 20 or so of my new servers had failed the upgrade process.
So back to square one. I deleted all those new servers. So I started noodling this shit some more. Now I am sure we have all seen this screen when building out a new environment.
If you are like me, you typically just tab right through this shit. But hey, this sounds like my scenario, right? I have existing servers running an SU 1 version. I am installing new subscribers and I am lacking the bootable ISO. So why not install the bootable version, then press yes on this screen and apply the upgrade at the end? So again, this sounds easy, right? Fuckin ehh, it’s easy. Now here is the issue. I am building 25 servers, there is no fucking way I am doing this without answer files. I could not for the life of me find a way to apply this upgrade patch and use answer files at the same time. So I said fuck it, I will build one by hand without using the answer files and just see if it works. Guess what, that shit still failed. I was all out of ideas here. Did I mention I only had 4 days to build out and fully test this giant UAT environment? I had just spent 1.5 of those 4 days fucking around with this stupid shit.
Now if you scroll down a few blog posts and see the post entitled http://www.backdoorburglar.com/?p=144….you will see that we had a major issue with deleting a few production servers (no big deal, right?)…haha this was actually a huge fucking deal, but that’s an entirely different matter. The reason I am bringing this up is because during the huge debacle, we actually got Cisco TAC to confirm that you should not apply an upgrade patch during the install (then why the fuck is that even on the install page?). This also prompted them to give us the bootable ISO of this SU 1 version of Call Manager that I was sorely in need of. So I went back and deleted all the new servers I had built (AGAIN) and rebuilt them using that bootable ISO. Now you may be thinking to yourself “what a lazy fucker, why didn’t he troubleshoot the issue of the failed upgrades instead of being a big vagina and taking the easy way out using the bootable ISO?” To that I say “fuck you” – I was trying to cram 10 lbs of shit into a 5 lb bag by trying to build out this ginormous environment in 4 days. I had already spent 1.5 days getting absolutely nowhere. Sometimes you gotta know when to hold ’em and know when to fold ’em.
Debugging is part of life as an engineer. If I had a dollar for every time I double and triple checked my configuration to confirm, then got to the point where it was time to run debugs, I would be a rich motherfucker.
Debugging on IOS is pretty damn good. I applaud Cisco for how well they build these sort of features into their products, not to mention how well documented they are. But, as Mick Jagger once said, “You can’t always get what you want”.
That’s what happened today when I started looking into a problem getting T38 to work with the following call flow: Verizon SIP trunk –> Acme Packet 3820 –> CUCM SME –> CUCM Leaf –> MGCP connected VG224
We see the re-invite for T38 version 0 sent to Verizon, and they send us a 200 OK with SDP indicating T38 version 0 as well. The call stays up for 32 seconds, and then we hang up and send Verizon a BYE.
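After enough of these I got tired of eyeballing SDP bodies by hand, so here’s a quick stdlib-only Python sketch of the check I was doing manually. The SDP below is fabricated for illustration – it is not from the actual Verizon trace:

```python
# Quick-and-dirty check for a T.38 answer in an SDP body.
# The sample SDP below is made up for illustration.
sdp = """v=0
o=- 1234 1234 IN IP4 10.0.0.1
s=-
c=IN IP4 10.0.0.1
t=0 0
m=image 16384 udptl t38
a=T38FaxVersion:0
"""

def accepts_t38(sdp_body):
    """Return True if the SDP contains a T.38 image media line."""
    return any(line.startswith('m=image') and 't38' in line
               for line in sdp_body.splitlines())

def t38_version(sdp_body):
    """Pull the T38FaxVersion attribute out of the SDP, if present."""
    for line in sdp_body.splitlines():
        if line.startswith('a=T38FaxVersion:'):
            return int(line.split(':', 1)[1])
    return None

print(accepts_t38(sdp))   # True
print(t38_version(sdp))   # 0
```

Point it at the SDP bodies you pull out of your captures and you can spot the call leg where the T.38 answer goes missing a lot faster than scrolling through traces.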
All the debugs looked good, so it was time to start capturing packets. In this environment, it’s pretty difficult to get a network engineer to do a port span – so we chose to capture packets on the VG224.
Here’s the config we used to do this:
debug vpm signal
monitor capture buffer holdpackets
monitor capture point ip cef t38trace FastEthernet 0/0 both
monitor capture point associate t38trace holdpackets
monitor capture point start t38trace
monitor capture point stop t38trace
monitor capture buffer holdpackets export tftp://10.98.1.198/t38capture.pcap
no monitor capture point ip cef t38trace FastEthernet 0/0 both
no monitor capture buffer holdpackets
If you want to read more about capturing packets in IOS – check out this Link.
Have you ever fucked up so bad that it’s pitchfork in the ass bad?
Everybody has those day-to-day fuckups. Stuff you learn from and say, “Oh, silly me – I should pay closer attention. How did I ever leave my keys in the refrigerator?”
Shit like that is kinda funny, and you think about it next time you come home and decide to hang them on the hook that is dedicated to this function – instead of chilling them like a hooker or a penguin (which are ironically the only two things that never really get cold – google it).
Then there are the big ones – and they’re usually stupid mistakes with no real excuse. The kind of stuff that happens when you get in a hurry, and have a momentary lapse of reason on something really, really important. These are the ones that cause the defecation to hit the rotary oscillator in a triumphant manner.
One of my favorite features of modern web browsers is tabs. I always organize my tabs from left to right. Mine go like this: SME, Leaf1, Leaf2, CUC, SBC1, SBC2….
It’s a good strategy when working in a complex environment with lots of clusters. Your mileage may vary – but this scheme works for me.
Yesterday afternoon I was on fire. We were blowing out a couple of clusters and rebuilding them. We had answer files, OVAs, some fresh 420 (UCS C420s, hippie – what were you thinking?). When you add these together you can fire off 10 different windows at once and just watch all those babies come to life. Stop for a minute to migrate 8 sites to a new set of SBCs, then – Oops – well, it looks like we added a few servers under System -> Server that won’t actually be added to the cluster for a few more days. I remembered someone once telling me that you shouldn’t leave servers configured in there if they’re not online, because you can end up with database corruption down the road that way.
So, I quickly deleted them. Fuck it – it’s a non-production cluster, right? Well, the CUCM tabs on the right are… OH SHIT! This is the tab on the LEFT!!!!