Apple’s Activation Lock

I just spent the last hour bashing my head against Apple’s Activation Lock on a coworker’s iPad 2. They brought it to me because it had nearly every assistive-mode option turned on and it was locked with an unknown iCloud account. I tried to get around the lock, and failing that, to return the device to factory condition, all to no avail. Even a factory reset crashes headlong into Activation Lock.

It’s heartening to know that Activation Lock took the guts out of the stolen-device market for Apple mobile hardware, but in this particular case it’s creating a huge headache. There is no way for me to move forward because the iPad refers to its owner only by a redacted email address, b******@gmail.com. I don’t know whose address that is, and there is no way for me to find out. So this device is pretty much bricked, and I have no choice but to send the user to an Apple Store with instructions to throw themselves on Apple’s mercy.

If you are going to give away or sell your Apple device, make sure you TURN OFF ACTIVATION LOCK. Nothing, not even DFU mode or a factory reset, can defeat the lock. There are some hacks that used to work, but Apple catches on quickly and updates iOS to close each hole soon after it appears.

I won’t pick a fight with Apple over this; Activation Lock met a clear and present need. It just makes issues like this one impossible for people like me to resolve. The best way around it is to secure each and every device with an iCloud account and write the iCloud username and password down in a legible, memorable, safe place! Without the iCloud account details or a trip to the Apple Store, the device is so much plastic, metal, and glass.

Vexatious Microsoft

Microsoft never ceases to bring the SMH. Today I attempted to update a driver for a Canon 6055 copier here at the office. The driver I had was a dead duck, so off I went to fetch the handy-dandy UFR II driver. I downloaded it, confirmed it was for 64-bit Windows Server 2012 R2, and selected it. Then I went to save the settings, and this is the error that greeted me:

“Printer Properties – Printer settings could not be saved. This operation is not supported.”

So, what the hell does this mean? Suddenly the best and brightest that Microsoft has to offer cannot save printer settings, and saving printer settings is “an operation that is not supported.” Step back and think about that for a second: saving your settings is not supported.

The error is not wrong, but it is massively misleading. It doesn’t come from the print driver system but rather from the print sharing system, and there is no indication of that anywhere. What’s the fix? Unshare the printer on the server, update the driver, then reshare the printer. The path is quick: uncheck the sharing option on the neighboring tab, go back, set your new driver, then turn sharing back on. It’s an easy fix; however, because the error is so poorly written, you don’t know where to go to address it. A more elegant system would either tell you to disable sharing before changing drivers or, since it knows you are sharing and trying to install a new driver, programmatically unshare, save the driver, then reshare, hiding all of it from the administrator. That’s not what Microsoft does; it writes awkward, poorly stated errors that lead you on a wild goose chase.

But now I know, so that’s half the battle right there. Dumb, Microsoft. So Dumb.

Network Monitoring

I’m in the middle of a rather protracted evaluation of network infrastructure monitoring software. I’ve started with Paessler’s PRTG and SolarWinds’ Orion, and in January I’ll be looking at Ipswitch’s products.

I also started looking at Nagios and Cacti. That’s where the fun-house mirrors start. The first big hurdle is cost versus no cost. The commercial products mentioned above are rather pricey, while Nagios and Cacti are GPL, open source, and principally available at no cost.

The PRTG evaluation was engaging; however, I ran into one of the first catch-22s of network monitoring software: Symantec Endpoint Protection considers network scanning provocative, so the uneducated SEP client blocks the poller, believing it to be a hostile network scanner. I also hit a headache with PRTG when the web client didn’t register changes as I expected. One thing I’ve come to understand about the commercial products is that each company takes a custom approach to licensing: PRTG licenses by individual sensor, Orion by buckets, and I can’t readily recall Ipswitch’s design, but I think it was by nodes.

Many of these companies seem to throw darts at a wall, sometimes hitting and sometimes missing. PRTG was okay but created a bumper crop of useless alarms, SolarWinds Orion has an exceptionally annoying network discovery routine, and I haven’t uncorked Ipswitch’s product yet.

I don’t know if I want to pay for this sort of product at all. It also seems to be one of those arrangements where, if I bite on a particular product, I’ll be on a yearly budget treadmill for as long as I use it, unless I go with the no-cost options.

This project may launch a new blog series, or not, depending on how things turn out. Looking online didn’t pan out very well; there is something of a religious holy war surrounding these products. Some people champion the GPL products, while others push whatever solution they went with when they first chose one. It’s funny: now that I care about the network, I’m coming to the party rather late. At least I don’t have to worry about the hot slag of alpha-revision software, and much of the vendor space seems quite mature.

If you work in the IT industry, please comment with your thoughts, recommendations, or experiences with this category. I’m keenly aware of what I call “show-stopper” issues.

Archiving and Learning New Things

As a part of the computing overhaul at my company, each workstation that we overhauled had its user profile extracted. This profile contains documents, downloaded files, anything on the Desktop, that sort of thing. There never really was any centralized storage until I brought it to life later on, so many of these profiles are rather heavy with user data, ranging all the way up to about 144 gigabytes each. This data primarily serves as a backup, so while it’s not essential to the operation of the company, I want to keep as much as I can in long-term storage, maximally compressed.

The process started with setting up an Ubuntu server on my new VMware host and giving it a lot of RAM. Once the Ubuntu server was established, which on its own took all of five minutes, I found a version of the self-professed “best compression software around,” 7zip, and installed it on the virtual server. Some light reading on 7zip suggested the general rule of thumb is “throw as much as you can at it and it will compress better,” so I maxed out the application: word size, dictionary size, the works. Then I started compressing the folders of profile data I had backed up earlier. Throwing 144 gigabytes at a maxed-out 7zip takes a really long time. Then I noticed the older VMware cluster, realized nothing was running on it, and for its swan song set up a second Ubuntu server with the same settings and pressed it into service as well.

I then thought about getting a notification on my phone when the compression routine was done, but by the time I thought of it, I had already started the 7zip compressor on both servers, and both were far enough along that I didn’t want to cancel either operation and lose the progress. I am no Bash expert, so it took a little digging to find that there is already a way to temporarily suspend an application and queue more commands behind it, so that when the first finishes, the next runs immediately: press Ctrl-Z to suspend the job, then issue “bg %1 ; wait %1 ; extra command”. Then I dug around for a way to be notified by email. None of these servers had anything at all in the way of a mail server, and I wasn’t keen on screwing around with Postfix or Sendmail, but I discovered a utility called ssmtp that did the trick. Once I configured it for use with my workplace Office365 account and did some testing, I had just what I was looking for. I suspended the compression on both servers and queued the email utility behind it. When the compression is done, I will be emailed.
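The suspend-and-queue trick can be sketched like this (the notification address is a placeholder, and `sleep` stands in for the long 7zip run so the sketch finishes quickly):

```shell
# In the interactive session the sequence was: press Ctrl-Z to suspend
# the running 7z job, then resume it in the background with a follow-up
# command queued behind it:
#
#   bg %1 ; wait %1 ; printf 'Subject: 7zip done\n\ndone.\n' | ssmtp me@example.com
#
# The same idea scripted end to end, with sleep standing in for 7z:
sleep 1 &          # long-running job, backgrounded
wait $!            # block until that specific job finishes
MSG="compression finished - notification goes here"
echo "$MSG"
```

The `wait %1` (or `wait $!`) is what turns “run this later” into “run this the moment the compressor exits,” with no polling involved.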

All in all, quite nifty, and it only took a few minutes to set up. Once this task is done, I can eliminate the “junky” Ubuntu server on the old VMware host altogether and trim back the one running on my new VMware host. I quite love Ubuntu: it’s quick and easy, set up what you want, tear it down when you don’t need it anymore, or put the VMware guest on ice as an appliance until you need it again. Very handy. Not having to worry about paying for it or licensing it is about as refreshing as it gets. I just need something for a temporary job, not a permanent solution. Although, considering how much malware is out there, the trade-off between Linux’s difficulty for end users and its remarkable safety may eventually tip in favor of Linux as a primary workstation operating system. There is still a long while before Linux is ready for end-user primetime. I sometimes wonder what it will take for the endless vulnerabilities of Windows to break Microsoft. Hope springs eternal!

Trials

A major Fortune 500 company has a world-renowned hiring trial for its new IT staff. There are all the usual steps, the resumes, the interviews, but there is also a fully funded practical trial as part of the application process. The job itself is cherry: practically autonomous, with real challenges and true financial backing, so the winner can dig in and achieve serious results.

The trial is rather straightforward: given a property address, you must approach, perform an intake procedure to discover what is required, and then plan and execute whatever is needed to solve the IT need.

The property has one person, a newly hired young woman sitting at a central desk on the ground floor. She has a folder and, within it, a script that she reads to each candidate:

“Welcome to your trial, this building has everything required to run a branch of our company. Every computer, networking component, and server component is placed and wired properly. Your task is to configure all the equipment throughout the branch properly. You will find all the resources you need to complete this task within the building. You have one week to complete this task. Good Luck.”

The young woman then folds her hands together and waits.

Several candidates engage with the trial, hoping to land the cherry job, and each has learned about the young lady at the reception desk. They pass all the requirements and eagerly arrive to try their hand. They impatiently sit through her canned speech and quickly head off to the basement to start in the server room.

Candidates come and go, some pass and some fail. The goal is to get the branch fully operational; on the last day of the week the branch is staffed, and the candidate must ensure that all the preparations are in place and that everyone can work without a technological failure. The trial is winnable but very arduous.

The young lady sitting at the central desk on the ground floor has a secret. She has a shoebox locked in a drawer of her desk, and around her neck, on a golden necklace, is the key. She has specific instructions: if a candidate approaches her, engages pleasantly, and shows sincere interest in her role in the branch – and it is not a last-ditch effort – she is to pause the conversation, unlock the drawer, and produce the shoebox for the candidate. Within the shoebox is the answer to the trial: every specific requirement written in clear, actionable text, plus a memory stick containing every proper configuration and a full procedure list that will bring the branch to full operation without a single hiccup. Everything from networking configurations to the copier codes for the janitorial staff is covered, and once executed it virtually guarantees a win.

How many people would simply ignore the receptionist and get cracking on the trial, and how many would take the time to get to know everyone and their roles in that particular branch? Either kind of candidate can win, whether through a sheer act of will or by simply being kind, careful, and honestly interested in the welfare of their coworkers. Nobody knows about the secret key, but sometimes the answer you need comes from a place you would never expect.

Peer to Peer File Transfer, Reep.io

I recently needed to move about ten gigabytes of data to a friend, and we used a new website service called reep.io. It’s quite a neat solution. It relies on a technology called WebRTC that exists in many modern browsers, like Chrome, Firefox, and Opera.

The usual way to move such a large set of data would be mailing a USB memory stick or waiting until we got together and sneaker-netting the files across. The issue with many online transfer services is that they are limited: most cap out at around two gigabytes and then ask you to register, for a paid or free account, to transfer more. Services like Dropbox exist, but you need enough free space to create the public link for your friend, and the data occupies your limited Dropbox storage. With reep.io there is no middleman and there are no limits; it’s browser to browser, secured by TLS. Is that a good thing? It’s better than nothing. The reason I don’t like the other services, even the free-to-use-please-register sites, is that there is always a middleman irritation in the way. Being careful not to blow the transfer limit, or chopping ten gigabytes into whatever bite-sized chunks the service arbitrarily demands, is very annoying.

The site is dead simple to use. Visit reep.io, then either drag in the file you want to share or click the file-add icon to bring up an open dialog and pick the file. Once set, the site generates a link you can send to anyone you wish to engage in a peer-to-peer file exchange. As long as you leave your browser running, the exchange will keep working with that particular link. You don’t need any extra applications, and it works across platforms, so a Windows peer can send a file to a Mac client, for example. That there is no size limit is a huge value right there.

If you have a folder you want to share, ZIP it up and share that file. It’s easy to use; because there is no middleman there are no accounts to create, and thanks to TLS, nobody is peeping over your shoulder.

Killing SpotifyWebHelper

I’ve had a problem with Spotify on my Mac for a while now. The damn program opens spontaneously, unbidden. What’s really annoying is that it also frequently auto-starts and auto-plays tracks I didn’t want to play.

I found out that when I start my Mac, or start Spotify itself, another application is automatically started alongside it called SpotifyWebHelper.

I’ve noticed that when I go into Activity Monitor and kill this app, the unwanted automatic start and play problem goes away. That’s good, but it’s not really the answer. The answer is to murder SpotifyWebHelper.

So I turned to the CLI: issue the command killall SpotifyWebHelper and press Enter. That does kill it, but what I want is to prevent it from ever running. So I unloaded it from launchctl and deleted its LaunchAgent .plist file. When Spotify starts, it puts it all back.

Then I went to where SpotifyWebHelper lives and renamed it. Spotify repairs this as well. Then I tried stripping the SpotifyWebHelper application in ~/Library/Application Support/Spotify of every POSIX permission with chmod a-rwx SpotifyWebHelper. The next time Spotify runs, it fixes that all by itself too.

This is less of a feature and more of a virus. A zombie virus, you just can’t kill it.

But I have killed it for good, and here is how to be free of SpotifyWebHelper:

  1. Quit Spotify
  2. Open Terminal, killall SpotifyWebHelper
  3. cd ~/Library/Application Support/Spotify
  4. rm SpotifyWebHelper
  5. cd ..
  6. chmod a-w Spotify
  7. Close Terminal, done!

After that, you will be free of the horrible SpotifyWebHelper bullshit and Spotify won’t automatically run and play things you don’t want it to.
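Step 6 is what makes the fix stick: a directory with no write permission can’t have new files created inside it, so Spotify can’t put the helper back. A quick sketch of the mechanism on a scratch directory (for non-root users; root ignores permission bits):

```shell
# Scratch directory standing in for ~/Library/Application Support/Spotify:
DIR=$(mktemp -d)
chmod a-w "$DIR"

# With every write bit stripped, nothing (Spotify included) can create
# files in the directory; the mode string shows no 'w' anywhere:
MODE=$(ls -ld "$DIR" | cut -c1-10)
echo "$MODE"
```

To undo it later, say for a Spotify update, a `chmod u+w` on the Spotify folder restores normal behavior.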

HP Pavilion Boot Loop Problem

Yesterday I ran into a devil of a time with an HP Pavilion Slimline workstation at work. The machine was beyond its HP warranty, so no help from them. It presented these symptoms:

  • Computer powers up normally.
  • All BIOS-level diagnostics pass.
  • No error codes or beep codes whatsoever.
  • Once the HP BIOS splash screen fades, the computer should boot into Windows, in this case Windows 7. It does not. The computer reboots back into the HP BIOS splash screen. Ad infinitum.
  • You can enter BIOS Setup and access the Boot Menu to select other boot sources; however, the F11 key to start System Restore is unresponsive.
  • All first-tier efforts to clear the error were taken: the BIOS was reset to factory defaults, and the power button was held down to clear the power-supply controller. Neither resolved the issue.

I then plugged in a copy of Knoppix that I had downloaded and installed on a USB memory stick. I could have burned the ISO to a DVD instead, but the USB stick was handy. When I use Knoppix this way, I like to enter the startup cheatcode “knoppix 2” (without quotes, of course) at the boot prompt, which starts Knoppix at runlevel 2, a text-only interface. I don’t need X Windows, and in this case it would just get in the way.

Once at the Knoppix CLI, I figured the boot flag, the boot manager, or the MBR was shot for the primary partition on the machine’s hard drive. Diagnostics indicated the drive itself was fine, so it wasn’t a physical failure. In systems like these, the first (and only) hard drive is most likely /dev/sda; you can search the dmesg log if you have any doubt about where in /dev the primary drive lives. Knoppix has fdisk, so that was my next stop. I knew this particular HP machine had a Windows recovery partition stuffed into it, so when I started fdisk I displayed the partition map and found three partitions: /dev/sda1, /dev/sda2, and /dev/sda4. Looking at the sizes, I figured the biggest one was the damaged partition, the middle one was probably swap or scratch, and the last one seemed sized right for the recovery partition. Honestly, it was a guess. I turned the bootable flag on for /dev/sda4 and off for /dev/sda1, wrote the partition map to disk, and issued “shutdown -r now” to reboot out of Knoppix. Technically I could have just unplugged the machine, but I’m a big fan of orderly shutdowns even when the consequences are irrelevant – it’s a good habit to have.
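For anyone who wants to see the mechanics without risking a real disk, here is roughly the same fdisk session replayed against a scratch image file instead of /dev/sda. The partition layout here is illustrative; fdisk’s “a” command toggles the bootable flag, which is why the same keystroke turned it off on /dev/sda1 and on for /dev/sda4 on the real machine. Requires util-linux fdisk:

```shell
# This sketch needs util-linux fdisk (Linux); bail out quietly elsewhere:
fdisk --version 2>/dev/null | grep -q util-linux || exit 0

# Build a small two-partition MBR image, then toggle the bootable flag
# onto the second partition, the same way I moved it to the recovery
# partition on the real disk:
truncate -s 16M disk.img
fdisk disk.img <<'EOF' >/dev/null 2>&1
o
n
p
1

+4M
n
p
2


a
2
w
EOF

# The chosen partition now carries the '*' in the Boot column:
fdisk -l disk.img | grep 'img2'
```

On the real machine the only differences were the device name (/dev/sda) and an extra “a” to clear the flag from the first partition before writing with “w”.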

The machine booted to the HP BIOS splash screen, and then Windows Recovery started. Once the recovery partition got going, a cutesy HP menu appeared offering a selection of options. I started with the simplest, something like “Microsoft Windows Boot Recovery”; it ran for maybe a second and then offered to reboot. I went for the reboot, and that fixed the issue. Windows started, but instead of a regular startup it went to the recovery menu, which suited me fine since that was where I was headed anyway by pounding the F8 key like a madman. I selected “Safe Mode with Networking,” plugged in my USB memory stick containing TRON, and got TRON working on the system.

Once TRON was done, I rebooted and let chkdsk naturally freak out about the structure of the NTFS partition on /dev/sda1. Chkdsk did what it had to do, and the system booted normally. I then set the machine up for redeployment.

I figure if anyone else has this issue, this blog post might be helpful. If it helped you out and you’re willing, a wee tip in Bitcoin or Dogecoin would definitely be appreciated.

Sandboxing FTW

After I reminded people that I offer a complimentary attachment-checking service through my office, I got a submission from one of our warehouse operators in Texas. It was an oddly named attachment called UmjSJCk.zip. I saved it to my Mac, opened Terminal, and unpacked the zip file; out came Quotation.exe. I giggled a bit when I ran the file command on it and saw that it was a Windows executable. Exactly what I expected. So I put it in a folder called sandbox and started the copy of Windows XP that I keep in VirtualBox. The OS has its hard drive set to immutable, so any changes or writes the OS makes go not to the VHD image but to a “snapshot” VHD image on the side. Each time I start the OS, it’s as if I am starting it for the first time, because when an immutable VM finds anything in the snapshot folder, it dumps it first and creates a fresh snapshot image for writes. I make sure the sandbox can’t see anything beyond my Mac by assigning its LAN connection to a host-only adapter, which means the VM can see only VirtualBox’s fake network host and nothing else.
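For the record, the two settings described above can also be applied from the CLI. This is a sketch; the VM name, disk filename, and adapter name below are placeholders for whatever your setup uses:

```shell
# Skip gracefully on machines without VirtualBox installed:
command -v VBoxManage >/dev/null || { echo "VirtualBox not installed"; exit 0; }

# Flip the sandbox disk to immutable, so every write lands in a
# throwaway snapshot that is discarded on the next cold start:
VBoxManage modifymedium disk "WinXP-sandbox.vhd" --type immutable

# Pin the VM's first NIC to host-only networking, so the guest can see
# only VirtualBox's fake network host:
VBoxManage modifyvm "WinXP-sandbox" --nic1 hostonly --hostonlyadapter1 vboxnet0
```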

So I started the sandboxed Windows XP, mounted the sandbox folder as a drive in the VM – set read-only as well, by the way, no baby-backwash here – and double-clicked Quotation.exe. It loaded a process and started to grope the network connection. Of course it did. So, with the bug trying its best to reach out and fetch its payload, I clicked the little red close control and told VirtualBox to power off the virtual machine.

Poof. All gone. Changes and everything. Then I dumped the sandbox contents.

I think what’s more concerning is that a ClamAV scan of this data on my Mac showed nothing infected. It certainly was trying to be nasty.

Then I started to wonder about the usefulness of VirtualBox for air-gapped computing, for privacy, and for serious encryption paranoia. But then I realized that turning off the AirPort on my MBP is just as good as anything I could screw around with in VirtualBox. An infection on my MBP? Heh… piff.

Better Credit Card Security

While talking with a friend who is enduring some unpleasantness, the conversation turned to the hazards of using credit cards to buy things, like food for example. That got me thinking: how would I design a really strong system to prevent data breaches?

Encrypt everything!

Well, perhaps not that, but hash everything. Here’s what I talked myself into. Of course none of this will ever happen, because nobody is going to effect a planetwide shift in payment processing on this yokel’s say-so, but still, here goes.

The issuing bank sets up a credit account. There are four key fields important to the classic transaction: name, number, expiration date, and CVV2. I think one could also establish a time-based one-time-password (TOTP) secret, operating the way Google Authenticator functions. You’d need a secret that the bank generates for its systems and for the physical card, and a smart chip on the card so it could forward the current TOTP code to the credit terminal at the point of sale.

The bank sets up a TOTP secret, named, say, JQP Credit Card (or by account number or whatever), with the secret 6B57078FB88A4DD73E447D2647DCEC7D04C3D887951BA6A2D8DBA294E0B60579. The card forwards the current code to the credit card terminal: right now it’s 726995, but in thirty seconds it’ll be something else. Since the credit card terminal and the bank share synced time via time.nist.gov, there is no risk of a mismatch between the two.
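Under the hood that rolling six-digit code is standard RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step, then dynamic truncation. A bash-and-openssl sketch of the calculation both the card chip and the bank would perform independently (the `totp` helper name is mine; the secret is the one from the example):

```shell
# RFC 6238 TOTP: HMAC-SHA1 the 8-byte big-endian time step with the
# shared secret, then dynamically truncate to six decimal digits.
totp() {  # usage: totp <hex-secret> <time-step>
  hmac=$(printf "$(printf '%016x' "$2" | sed 's/../\\x&/g')" \
    | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$1" -r | cut -d' ' -f1)
  offset=$((16#${hmac:39:1}))                       # low nibble of last byte
  dt=$(( (16#${hmac:$((offset * 2)):8}) & 0x7fffffff ))
  printf '%06d\n' $((dt % 1000000))
}

# The bank-issued secret from the example, and the current 30-second
# window that card and bank both derive from synced clocks:
SECRET=6B57078FB88A4DD73E447D2647DCEC7D04C3D887951BA6A2D8DBA294E0B60579
totp "$SECRET" $(( $(date +%s) / 30 ))
```

A clock drift of more than one 30-second step between terminal and bank would break verification, which is exactly why the shared time.nist.gov sync matters.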

The customer swipes at the credit card terminal, a value is entered, and a timestamp is recorded; all of this is already part of a standard credit transaction. The terminal reads the name, expiration, and CVV2 from the magnetic stripe, the smart chip forwards the TOTP code, and the terminal assembles it all into an EDI transaction:

JOHN/Q/PUBLIC#1111222233334444#1015#170#726995 and applies SHA-256 to it, producing:

621d3dd5a66277a7ab3737f306728e3c4bc5f3cd20c8730c37cc61c6575de0ba
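The terminal-side hashing step is one line with coreutils (on macOS, substitute `shasum -a 256` for `sha256sum`):

```shell
# Hash the assembled EDI string exactly as the terminal would:
EDI='JOHN/Q/PUBLIC#1111222233334444#1015#170#726995'
HASH=$(printf '%s' "$EDI" | sha256sum | cut -d' ' -f1)
echo "$HASH"
```

Note the `printf '%s'`: an `echo` would append a newline and produce a completely different digest, which is exactly the avalanche sensitivity the scheme relies on.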

This is stored in a database and then forwarded to the bank with the timestamp, so it’ll look like this:

987654321#621d3dd5a66277a7ab3737f306728e3c4bc5f3cd20c8730c37cc61c6575de0ba#15.09#1426615839

So the bank will be presented with a customer ID, a SHA-256 hash, the total dollar amount, and the Epoch time – the number of seconds since 00:00:00 UTC, January 1, 1970 – which on Linux is easily obtained from the output of date +%s.

The bank would then have everything it needs: the secret key, which combined with the Epoch time from the transaction yields the TOTP calculation and generates the answer 726995; the card details from the customer ID; the SHA-256; and the amount. It could then calculate the hash on its own:

621d3dd5a66277a7ab3737f306728e3c4bc5f3cd20c8730c37cc61c6575de0ba

And authorize the transaction.

Even if the card details were stolen by someone copying the numbers off the card, the thief wouldn’t get the TOTP secret, and the secret’s output changes every 30 seconds. If someone ran this transaction with a guessed TOTP code, they’d generate this instead:

987654321#a1b714fba988632200c78a5b9021bca5b48f149b036aa901c03173f0f2de5399#15.09#14266158

and the bank would instantly detect the incorrect SHA hash, cancel the card, and ship a new one.

This is rather involved, but the practical upshot is this: if a vendor kept these transactions in a database and someone stole that database for their own nefarious needs, the TOTP and SHA-256 would make the data worthless, because the TOTP has no predictable pattern if you don’t know the secret, and SHA-256 is extremely sensitive to even the smallest change in its input. This would free vendors, banks, and customers from the risk of PII leakage and identity theft.

I’ve also thought this would be a great way to secure SSNs for use with the government: they know your SSN and you know your SSN, so when communicating over a possibly compromised channel you can authenticate not with the SSN itself but with its hash.

John Q. Public, 123-45-6789 -> 01a54629efb952287e554eb23ef69c52097a75aecc0e3a93ca0855ab6d7a31a0