Cisco Phone Outage Solved!

Since early November of 2015, I’ve been contending with the strangest behavior from my Cisco infrastructure. A small number of Cisco IP Phones go out around 7:30 am and then pop back to life right at 8 am. The phones fail and recover all by themselves, a very strange turn of events.

My first thought was that it was power related, so I started to debug and play around with PoE settings. That wasn’t the answer. I then moved some phones from the common switch to a different switch nearby, and the problem went away for the moved phones. That right there proved to me that it wasn’t the phones, and it wasn’t the Cisco CallManager; it had something to do with the switch itself. So I purified the switch, moving everything that wasn’t a Cisco IP Phone off of it, and the problem continued.

I eventually got in touch with Cisco support, and they suggested a two-pronged effort: set up SPAN on the switch and run a packet capture there, and set up SPAN on a phone and run a packet capture there as well. The capture on the switch showed a switch in distress, many ugly lines where TCP was struggling. The phone capture was immense; I opened it up in Wireshark and went to the start of the day’s phone failure event. The minute I got to the start, 7:33:00.3 am, the first line appeared. It was an ICMPv6 “Multicast Listener Report” packet, one of many that filled the rest of the packet capture. Millions of packets, all the same.
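For anyone who wants to do the same kind of digging, the packets are easy to isolate. This is a sketch using tshark, the command-line companion to Wireshark, with a placeholder capture file name; MLDv1 reports are ICMPv6 type 131 and MLDv2 reports are type 143:

    # Show only the Multicast Listener Report packets in the capture.
    # The same expression works in Wireshark's display filter bar.
    tshark -r phone-capture.pcapng -Y "icmpv6.type == 131 || icmpv6.type == 143"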

The multicast packets could explain why I saw the same traffic curves on every active interface: when a switch encounters a multicast packet, it floods it out every active port as if the packet had been sent to that port. Once I extracted the source addresses of all these offensive packets, sorted the list, and dropped the duplicates, I ended up with a list of four computers. I poked around a little more on Google and discovered, to my chagrin, that a specific Intel Network Interface Controller, the I217-LM, was squarely at the center of this particular network flood scenario. I looked at the affected machines, and all of them were the same: HP ProDesk 600 G1 DMs. These tiny computers replaced a good portion of our oldest machines when I first started at Stafford-Smith, and I never gave them a second thought. Each of these systems had this very Intel NIC in it, with a driver from 2013. The documented fix is to update the driver, and the problem goes away. So that’s exactly what I did on the workstations that were the source of the multicast packet storm.
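The extraction step I described above is just a short pipeline over the same capture; again, the capture file name is a placeholder, and the filter matches the two Multicast Listener Report types:

    # Pull the IPv6 source address of every Multicast Listener Report,
    # then reduce the list to unique senders with a packet count for each.
    tshark -r phone-capture.pcapng \
      -Y "icmpv6.type == 131 || icmpv6.type == 143" \
      -T fields -e ipv6.src | sort | uniq -c | sort -rn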

I can’t believe that anyone would design a NIC like this, where there is the possibility of a multicast flood, which is the worst kind of flood I think. All it takes is a few computers to start flooding, and it sets off a cascade reaction that drags the switch to the ground.

We will see in the days to come if this solved the issue or not. This has all the hallmarks of what was going on and has filled me with a nearly certain hope that I’ve finally overcome this three-month headache.

Network Monitoring

I’m in the middle of a rather protracted evaluation of network infrastructure monitoring software. I’ve started looking at Paessler’s PRTG and SolarWinds’ Orion product, and in January I’ll be looking at Ipswitch’s products.

I also started looking at Nagios and Cacti. That’s where the fun-house mirrors start. The first big hurdle is no-cost vs. cost: the commercial products mentioned above are rather pricey, while Nagios and Cacti are GPL, open source, and principally available at no cost.

The PRTG evaluation was engaging; however, I ran into one of the first catch-22s of network monitoring software: Symantec Endpoint Protection considers network scanning to be provocative, so the uneducated SEP client blocks the poller because it believes it to be a network scanner. I also ran into a bit of a headache with PRTG, as the web client didn’t register changes as I expected. One of the things that I have come to understand about the commercial network products is that each one of them has its own custom approach to licensing. PRTG licenses by individual sensor, Orion by buckets, and I can’t readily recall Ipswitch’s design, but I think it was based on nodes.

Many of these vendors seem to throw darts at the wall with their products, sometimes hitting and sometimes missing. PRTG was okay but created a bumper crop of useless alarms, SolarWinds Orion has an exceptionally annoying network discovery routine, and I haven’t uncorked Ipswitch’s product yet.

I don’t know if I want to pay for this sort of product. It also seems to be one of those arrangements where, if I bite on a particular product, I’ll be on a per-year budget treadmill for as long as I use it, unless I try the no-cost options.

This project may launch a new blog series, or not, depending on how things turn out. Looking online didn’t pan out very well. There is something of a religious holy war surrounding these products: some people champion the GPL products; other people push whatever solution they went with when they first decided on a product. It’s funny, but now that I care about the network, I’m coming to the party rather late. At least I don’t have to worry about the hot slag of “alpha revision software,” and much of the provider space seems quite mature.

If you work in the IT industry, please comment with your thoughts and feelings about this category, especially if you have any recommendations or experiences. I’m keenly aware of what I call “show-stopper” issues.

Archiving and Learning New Things

As a part of the computing overhaul at my company, each workstation that we overhauled had its user profile extracted. This profile contains documents, downloaded files, anything on the Desktop, that sort of information. There never really was any centralized storage until I brought it to life later on, so many of these profiles are rather heavy with user data; they range all the way up to about 144 gigabytes each. This user data primarily serves as a backup, so while it’s not essential for the operation of the company, I want to keep as much as I can in long-term storage and compress it as much as possible.

The process started with setting up an Ubuntu server on my new VMware host and giving it a lot of RAM to use. Once the Ubuntu server was established, which on its own took all of five minutes to install, I found a version of the self-professed “best compression software around,” 7zip, and got that installed on the virtual Ubuntu server. Then I did some light reading on 7zip, and the general rule of thumb appears to be “throw as much as you can at it and it will compress better,” so I maxed out the application with word size, dictionary size, the works. Then I started to compress folders containing all the profile data that I had backed up earlier. Throwing 144 gigabytes of data at 7zip when it’s maxed out takes a really long time. Then I noticed the older VMware cluster, realized that nothing was running on it, and for its swan song set up another Ubuntu server, duplicated the settings from the first one, and pressed that into service as well.
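For reference, a maxed-out p7zip invocation looks roughly like the sketch below; the archive and folder names are placeholders, and the dictionary size is only an example bounded by how much RAM the server has:

    # Maximum-effort 7zip archive of one extracted user profile.
    #   -mx=9      maximum compression level
    #   -md=1024m  dictionary size (example value; limited by available RAM)
    #   -mfb=273   "word size" (fast bytes) at its ceiling
    #   -ms=on     solid archive, so similar files compress against each other
    7z a -t7z -mx=9 -md=1024m -mfb=273 -ms=on profile-backup.7z /backups/profiles/username/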

I then thought about getting a notification on my phone when the compression routine was done, but by the time I had thought about it, I had already started the 7zip compressor on both servers. Both were far enough along that I didn’t want to cancel either operation and lose the progress I had made compressing all these user profiles. I am not a Bash shell expert, so it took a little digging around to find that there is already a way to temporarily suspend an application and queue more commands behind it, so that when the first application completes, the next one starts immediately: you press Control-Z, which suspends the application, and then run “bg %1 ; wait %1 ; extra command”. Then I thought about how I’d like to be notified and dug around for some sort of email method. None of the servers I put together had anything at all in the way of an email server, and I really wasn’t keen on screwing around with postfix or sendmail. I discovered a utility called ssmtp that did the trick. Once I configured it for use with my workplace Office365 account and did some testing, I had just the thing I was looking for. I suspended the compression on both servers and appended the email utility to run when the compression finishes. When the compression is done, I will be emailed.
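In case the trick is useful to anyone else, here is the general shape of it, with a hypothetical address standing in for my Office365 account and a made-up mail body; ssmtp reads the headers and body from standard input and relays through whatever mailhub its config file points at:

    # 7zip is already running in the foreground; press Control-Z to suspend it,
    # then resume it in the background, wait for it, and send mail when it ends.
    bg %1 ; wait %1 ; printf "To: me@example.com\nSubject: compression finished\n\nThe 7zip job on this server is done.\n" | ssmtp me@example.com

    # /etc/ssmtp/ssmtp.conf pointed at Office365 looks something like this:
    #   mailhub=smtp.office365.com:587
    #   UseSTARTTLS=YES
    #   AuthUser=me@example.com
    #   AuthPass=secret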

All in all, quite nifty, and it only took a few minutes to set up. Once I’m done with this particular task, I can eliminate the “junky” Ubuntu server on the old VMware host altogether and trim back the Ubuntu server running on my new VMware host. I quite love Ubuntu: it’s quick and easy, you set up what you want, tear it down when you don’t need it anymore, or put the VMware guest on ice as an appliance until you do need it sometime later. Very handy. Not having to worry about paying for it or licensing it is about as refreshing as it can get. I just need something to work a temporary job, not a permanent solution. Although, considering how much malware is out there, the difficulty Linux presents to end users may eventually give way to the remarkable safety of running Linux as a primary workstation operating system. There is still a long while before Linux is ready for end-user primetime. I sometimes wonder what it will take for the endless vulnerabilities of Windows to break Microsoft. Hope springs eternal!

Peer to Peer File Transfer, Reep.io

I recently needed to move about ten gigabytes of data to a friend, and we used a new website service called reep.io. It’s quite a neat solution. It relies on a technology called WebRTC that exists in many modern browsers, like Chrome, Firefox, and Opera.

The usual way to move such a large set of data from one place to another would probably be mailing a USB memory stick or waiting to get together and sneaker-netting the files across. The issue with a lot of the online services that let people transfer files like this is that many of them are limited. Most of the online offerings cap out at around two gigabytes and then ask you to register, either for a paid or free account, to transfer more data. Services like Dropbox exist, but you need the storage space to create that public link to hand to your friend so they can download the data, plus it occupies the limited space in your Dropbox. With reep.io, there is no middleman. There are no limits. It’s browser to browser and secured by TLS. Is that a good thing? It’s better than nothing. The reason I don’t like any of the other services, even the free-to-use-please-register sites, is that there is always this middleman irritation in the way; it’s inconvenient. Always having to be careful not to blow the limit on the transfer, or, if it’s a large transfer like ten gigabytes, chopping the data up into whatever bite-sized chunks the service arbitrarily demands, is very annoying.

Using the site is dead simple. Visit reep.io, then either drag in the file you want to share or click on the file-add icon to bring up a file-open dialog and find the file you want to share. Once set, the site generates a link that you can send to anyone you wish to engage in a peer-to-peer file exchange. As long as you leave your browser running, the exchange will keep working with that particular link. You don’t need any extra applications, and it works across platforms, so a Windows peer can send a file to a Mac client, for example. That there is no size limit is a huge value right there.

If you have a folder you want to share, you can ZIP it up and share that file. It’s easy to use, and because there are no middlemen, there aren’t any accounts to create, and thanks to TLS, nobody is peeping over your shoulder.

Shifting Platforms

I go through cycles of having an interest, and then not having an interest, in social media. Twitter and Facebook are the core services I’m thinking about here. Among these services, I’ve given up on Twitter. I no longer engage with anyone on Twitter, and the leading edge of loud, noisy chatter has carried on without me. If I do run the Twitter application, it’s mostly to witness some event as it unfolds, like a news source, or to jump on some shame bandwagon when a public figure makes a terrible mess of their life by saying or doing something stupid.

I am about to give up on Facebook as well. There are many reasons for this renewed effort to leave the system. I am tired of the see-saw polarity between stories; the negative political stories mixed in with the positive, reaffirming ones build up a kind of internal mental noise that clouds my day and keeps me from being focused. Another reason to leave is that the interface has become somewhat moribund on its own: you can sometimes comment, sometimes not, the only option for expressing a feeling is “Like,” and the entire service has become self-balkanized. I have friends and family on Facebook, but out of all of them, I only follow a few and have muted the rest. I don’t really miss the engagement, but always having to think about tailoring my thoughts to the audience has started to give me fatigue.

I think then that it may be time for me to go back to writing blog posts on my WordPress blog. The blog encourages longer format writing, and I expect that engagement will drop as I won’t be using Facebook. In a lot of ways, it is a kind of social addiction and the only way to break it is to wean off of it. Perhaps cold turkey is not right, but rather cool turkey.

I don’t expect anyone to follow me off of Facebook. I will share my blog posts to Facebook so people can still see what I write, but the engagement will drop off. Feel free to comment on my blog if you wish. Otherwise, that will be that.

On a more technical note, I changed how my posts are shared across systems. The original path was to publish a WordPress entry, which would share to Tumblr, and that would then share to Twitter and Facebook. I have torn that down and set it up so that WordPress itself shares to Facebook, Google Plus, Tumblr, and Twitter. It’s a more direct path that doesn’t require people to slog through my Tumblr.

Thanksgiving 2015

’Tis the season for us to unpack all the holiday crazy that comes with the post-Halloween holiday adventure: Thanksgiving and Christmas. Cooking, planning, setting up, and a lot of decking of the halls!

So we start with Thanksgiving. Weeks ago we took advantage of the 50% discount deal at our local supermarket and made room for the frozen turkey in our basement fridge. Then we slowly accumulated all the other ingredients for our “feeding an army for two people” style of Thanksgiving. On Monday, November 23rd, I caught a little video from a television and network cooking personality, Mr. Alton Brown. He recommended that people could defrost and brine a turkey at the same time. I had a frozen turkey in my freezer, and since I had never brined a turkey before, I didn’t know how it would turn out. Following Mr. Brown’s advice, I hauled out the twenty-pound bird and found that my biggest stock pot fit it like a glove. The directions couldn’t have been more direct and simple. Strip the turkey of its webbing and plastic wrap, then put a cup of kosher salt in the vessel along with 2L of hot tap water and stir until the salt is dissolved. Then add 4L more of cold water and put the turkey in. I positioned it so that the main cavity pointed up at me; that way, as I added more water to fill in all the way around the turkey, it wasn’t going into the cavity, and then I poured into the cavity until the entire bird was submerged. Then I wrapped the top in plastic wrap and put it in the basement, behind locked doors. No refrigeration required! As the turkey defrosted itself, it also brined itself. When I temped out the bird two days later it was at about 45 degrees, and I stowed it in the fridge until we were ready to cook it. When I was set, I poured the water off and rinsed it with fresh cold tap water, all the cavities and everything. Then I put it in the roasting pan.

The oven was set at 350 degrees; however, it was running hot for about the first twenty minutes, so the first shot was at about 400 degrees. I knew something wasn’t right because the turkey was making a lot of snap, crackle, and pop noises. When I checked, I noticed the temperature disparity and corrected the dial, which brought the oven back into calibration.

There were two competing schools of thought during the cooking process. The first was that I had accidentally turned our turkey into Lot’s turkey, a solid pillar of salt. The other school was “it defrosted and it didn’t amount to crap,” and that the salt was pretty much just a silly affectation. I held out hope, mostly because of the sage words of Mr. Brown, whom I trust when it comes to food preparation and cooking.

We were a little taken aback when the temperature probe indicated that every part of the turkey had reached about 170 degrees; it was well and truly done. I asked, “How much juice is in the pan?” and the answer was “Not very much, if any. Only what it was basted with.” We had made enough of our own with basting juices made from turkey broth concentrate and from sauteing the neck. I let the turkey settle for about ten minutes and then carved into it.

The meat was so moist and juicy that it fell apart as I carved into it. The entire dinner was spent marvelling at just how amazing it all was and how we’ll never do a turkey any other way than this. So simple: a saltwater bath for three days changes so much about a turkey! And just like Mr. Brown promised, the brine really shines for leftovers. Leftover turkey is usually tough and dry as cardboard, but the brined turkey is nearly as amazing each time we take a little out of the fridge for dinner.

I can’t understand why everyone doesn’t brine their turkey. We’ll brine ours from now on; the fresh meat and the leftovers are just the tip of how amazing this is. The turkey was probably fully thawed in a little over a day! The three days just added to the brine’s power to make the bird juicy and amazingly flavorful.

Just for the record, the turkey wasn’t related to Lot at all, it wasn’t salty. It was amazing.

Weak Certificates

I’ve got an odd little problem at work. There is a Ricoh copier in the Traverse City office that I apparently can no longer manage remotely due to an SSL error. The error that Firefox throws is ssl_error_weak_server_cert_key, and in Google Chrome it’s ERR_SSL_WEAK_SERVER_EPHEMERAL_DH_KEY. In both cases I understand what the issue is: the SSL layer is weak because the Diffie-Hellman key is not big enough.
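If you want to confirm that diagnosis from a command line, a reasonably recent OpenSSL will report the size of the ephemeral key the device negotiates; the hostname below is a placeholder for the copier’s address, and the “Server Temp Key” line appears in OpenSSL 1.0.2 and later:

    # Ask the copier for a TLS handshake and look at the ephemeral key it offers;
    # a line like "Server Temp Key: DH, 512 bits" confirms the weak DH parameters.
    openssl s_client -connect copier.example.local:443 </dev/null 2>/dev/null | grep "Server Temp Key"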

I’ve run into this issue before, mostly with self-signed certs, and the browsers have usually allowed me to click on an exception and get on with my day. With current Firefox and Chrome, that is no longer the case: the browsers just refuse to display the page. I understand the logic behind it, everyone wants a more secure web, but sometimes what we are really after isn’t privacy or security, but rather just getting our work done.

I still need to connect to this copier and manage it, and frankly my dear, I don’t really care that much that the transactions be secure. In a way, this security is irrelevant. The traffic on our WAN is flowing over a Meraki VPN site-to-site link, so it’s already secure. This is security on top of security, and it’s in the way.

So I thought about using the awful Internet Explorer for this, but I chafe at even considering one more wretched bit of Microsoft technology; there has to be a better solution. When you run into little bits like this, the best way forward is to pursue my favorite solution, heterogeneous computing! There’s more than one way to get what you are after. So if Firefox and Chrome won’t work, and Internet Explorer is unthinkable, how about Opera?

So I downloaded Opera and installed it, then browsed to my copier in Traverse City. Opera told me about the error, but it also provided an exception button, and once I clicked that, the error was bypassed and my copier’s remote management screen appeared.

So now I’ll add Opera to all the other browsers I have on my computers. The answer is competition. I wonder sometimes if there isn’t a special browser out there for IT type people like me. They’ll render anything, ignore any “privacy or security” type errors, all so people like me can get our jobs done. For now, Opera seems to lead the pack, at least for this. Thank you Opera!

Sandboxing FTW

After I reminded people that I offer a complimentary attachment-checking service through my office, I got a submission from one of our warehouse operators in Texas. It was an oddly named attachment called UmjSJCk.zip. I saved it to my Mac and opened Terminal, then unpacked the zip file, which yielded Quotation.exe. I giggled a bit when I ran the file command on it and saw that it was a Windows executable. Exactly what I expected. So I put it in a folder called sandbox and started the copy of Windows XP that I keep in VirtualBox. The OS has its hard drive set to immutable, so any changes or writes the OS makes are not sent to the VHD image, but rather to a “snapshot” VHD image on the side. Each time I start the OS, it’s as if I am starting it for the first time, because when an immutable VM finds anything in the snapshot folder, it dumps it first and then creates a new snapshot image for writes. I make sure the sandbox can’t see anything beyond my Mac by assigning its LAN connection to a host-only adapter, which means the VM can only see VirtualBox’s fake network host and nothing else.
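For anyone who wants to build the same kind of throwaway sandbox, the whole arrangement can be set up from Terminal with VBoxManage; the VM and disk names below are placeholders for whatever your Windows XP guest is called:

    # Inspect the attachment first; file reports a Windows executable.
    unzip UmjSJCk.zip && file Quotation.exe

    # Mark the XP system disk immutable so every boot starts clean;
    # writes go to a throwaway differencing image instead.
    VBoxManage modifyhd "WinXP-sandbox.vhd" --type immutable

    # Pin the VM's only NIC to the host-only network so it can see
    # nothing beyond VirtualBox's fake network host.
    VBoxManage modifyvm "XP Sandbox" --nic1 hostonly --hostonlyadapter1 vboxnet0

    # Offer the sandbox folder to the guest, read-only.
    VBoxManage sharedfolder add "XP Sandbox" --name sandbox --hostpath "$HOME/sandbox" --readonly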

So I started the sandboxed Windows XP, mounted the sandbox folder as a drive in the VM (set read-only as well, by the way, no baby-backwash here), and then double-clicked Quotation.exe. It loaded a process and started to grope the network connection. Of course it did. So, with the bug trying its best to reach out and fetch its payload, I clicked the little red close control and told VirtualBox to power off the virtual machine.

Poof. All gone. Changes and everything. Then I dumped the sandbox contents.

I think what’s more concerning here is that scanning this data with ClamAV on my Mac showed no infection. Well, it certainly was trying to be nasty.

Then I started to wonder about the usefulness of VirtualBox for air-gapped computing, for privacy, and for being really paranoid about encryption. But then I realized that turning off the AirPort on my MBP is just as good as anything I could screw around with in VirtualBox. An infection in my MBP? Heh… piff.

Geek Excursions: BitMessage

Along with my curiosity surrounding Bitcoin, there is a similar technology that has been released for public use called BitMessage. It is a really neat way to communicate securely using a method that involves absolutely no trust whatsoever. It’s a completely decentralized email infrastructure and has captured a lot of my spare attention. BitMessage works a lot like Bitcoin does: you can create addresses on the fly, each a long sequence of seemingly random characters that your client can produce because you hold both a public key and a private key. In a lot of ways BitMessage deals with the biggest problem surrounding PGP/GPG, which is key management. Nobody really wants to manage keys or use the system because it’s extra work. Plus, even with PGP/GPG, your identity is written on your keys for everyone to see.

Getting started with BitMessage is a snap. First you need to download the BitMessage client, which you can get at bitmessage.org. There are Windows and Mac clients available; you can start one and be instantly attached to the BitMessage network, ready to create new “BitMessage addresses” and throw them away just as easily. So, for example, you could reach me by sending a BitMessage to this address: BM-2cWAk99gBxdAQAKYQGC5Gbskon21GdT29X. When you send a message using BitMessage, it’s to this address and from an address that your client makes, so the conversation occurs securely, and since every node has a copy of the data, it’s impossible to tell who is getting what information. I think an even more secure method would be to cross BitMessage with a PGP/GPG key. The only problem with a key like that is that classically PGP/GPG keys include your email address in the user ID, so that you can be found by a human-readable address when someone looks up your public key, to verify a signature for example. But the PGP/GPG system doesn’t actually require a real email address; you can create a public and private keypair, make the email address up from whole cloth, and instead just let people know the key ID that you want them to use. So technically, if Alice wanted to communicate secretly with me, we could give each other our public keys to start and then use BitMessage as the messaging mule. I don’t see how any eavesdropper could make sense out of any of that data flow. The PGP/GPG encryption keeps the contents of the message secure, and BitMessage itself seriously obfuscates, if not outright eliminates, being able to tell where the messages are ultimately going to or coming from.
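A sketch of that Alice scenario with stock GnuPG commands follows; the key IDs and file names are made up for illustration, and the armored output is what you would paste into a BitMessage compose window:

    # Generate a keypair interactively; use a throwaway label instead of a real
    # email address when prompted for the user ID.
    gpg --gen-key

    # Export the public key as ASCII armor and hand it to Alice out of band
    # (or over BitMessage itself).
    gpg --armor --export 0xDEADBEEF > my-pubkey.asc

    # Encrypt a message to Alice's key with ASCII armor so it pastes cleanly
    # into BitMessage; this writes message.txt.asc next to the original.
    gpg --armor --encrypt --recipient 0xA11CE001 message.txt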

I have to admit that BitMessage is very user friendly and very handy to have. My only issue with it is that I don’t know anyone who uses it, but perhaps this blog post will change that. If you are interested in this bleeding-edge crypto/privacy software, I encourage you to chat me up on BitMessage for serious matters or for fun.