TWSBI Fountain Pen

A few months ago, while talking with a friend about technology, the conversation turned to throwback items that we enjoy using. I brought up my fondness for fountain pens, which always seems to surprise people. The idea of the pen as a writing instrument goes back a very long time, and around the turn of the last century there was an explosion in patents related to fountain pens and how they hold and dispense ink as you write. After that conversation, I was inspired to do a little shopping. Over the past several years I had set aside some money for small gifts to myself; I never really touch it, so it just sits in my account. I came across a company that sells highly regarded fountain pens, called TWSBI. As I browsed the options on Amazon, I looked at my Lamy fountain pen and realized that while it is good as entry-level pens go, I wanted to move up a notch. TWSBI seemed a good option. The pen I selected was the TWSBI Diamond 580AL Silver with a medium nib. I also got the broad nib, as many reviewers expressed pleasure at writing with both.

TWSBI 580AL Fountain Pen

I have to say that writing with it is quite an experience. I started writing with fountain pens back in college and found that the way the ink flows beats any other sort of pen hands down. Plus, the way the nib moves on good paper makes writing longhand a pleasure. The pen can still work on rough stock, but it struggles, and there is more skritch-skritch-skritch while writing on some of the lowest-class papers out there.

The Lamy I have uses a removable piston converter, while the TWSBI has its piston tank built into the body of the pen itself. I find that the TWSBI holds far more ink than my Lamy ever did.

Another little bit to note: fountain pens aren't meant for left-handed writers, as far as I know. The ink doesn't dry fast enough for the way a lot of left-handed writers have to drag their hand across the page. I don't know many left-handed writers, though, so there is no way to see whether they could use one without making a mess of their hands with the ink.

If you have a little bit of spending money, this pen can go a long way, in both its look and its function, toward adding a little something to your workaday life. It won't solve problems or anything like that, but it is something nice to have that a lot of people appreciate. I always chuckle to myself when people remark on how I use a fountain pen given what I do for a living, which makes people think I should be keyboard-bound. Sometimes old things peak, and iterations afterward are all downhill from that peak. In a lot of ways, just like Windows 2000. LOL.

Extracting Cisco Unity 10.5 Voicemails

In my work, I wear many hats. Amongst these is VoIP Manager. It's not really a job or a position in its own right, but it fits neatly under the heading of IT Manager, which is my actual title. I oversee the company's Cisco CallManager and Unity systems.

Occasionally, when coworkers of mine leave employment, they leave behind voicemails in their Unity mailbox. I've been searching for a long while for a convenient method to extract these voicemails out of Unity into some format that could be easily moved around, so that other people could listen to the recordings and act on them.

I've tried a lot of options and run endless Google searches. I eventually discovered a rather involved method to acquire these messages, one I would categorize as "bloody hell" because it involves a lot of questionable hacking to procure the audio files.

The hack begins with the Cisco Disaster Recovery System, known as DRS. If you have Unity and CallManager set up, as I do, you have probably already established DRS and pointed it at wherever your backups live. In my case, DRS points to a share on my primary file server. That's where you start: make sure DRS is running and generating good backups. This method essentially backdoors the backup system to get at the recordings Unity takes.

In my Unity folder, I have two days' worth of backups, and the files you specifically need are 2018-02-19-20-00-07_CUXN01_drfComponent.xml and 2018-02-19-20-00-07_CUXN01_CONNECTION_MESSAGES_UNITYMBXDB1_MESSAGES.tar. Your filenames may be slightly different depending on what you named your Unity system. When I found these files, I didn't think anything of the XML file, but the tar file attracted my notice. I copied it to my MacBook and attempted to unpack it with bsdtar. It blew up. As it turns out, Cisco made a fundamental change to DRS after Unity 7: they started encrypting the tar files with a randomized key derived from the Cluster Security Password. My cluster is very simple, just Unity and CM (and I suppose also Jabber, but Jabber is worthless, and I oftentimes forget it exists). It would be too much to ask for them to use .tar.enc; no, just .tar, which confuses bystanders. That is pretty much the way of things at Cisco, I've grown to appreciate.
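
If you want to sanity-check your own archive before going any further, the terminal tells the story quickly; the filename below is just my example from above. A readable tar lists its contents, while the encrypted DRS flavor makes both tools complain:

# A plain tar identifies itself; the encrypted DRS variety reads as raw data.
file 2018-02-19-20-00-07_CUXN01_CONNECTION_MESSAGES_UNITYMBXDB1_MESSAGES.tar
# Listing the contents is where bsdtar blows up on the encrypted archive.
bsdtar -tf 2018-02-19-20-00-07_CUXN01_CONNECTION_MESSAGES_UNITYMBXDB1_MESSAGES.tar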

The next tool you need is from a site called ADHD Tech. Look for their DRS Backup Decrypter. It's a standalone Windows app, and you need it to scan the archive and extract the unencrypted tar data.

The next utility you will need is the DRS Message Fisher. Download that as well. I will say that this app has some rough edges, and one of them is that you absolutely have to run it in Administrator mode, otherwise it won’t function properly.

Start the DRS Message Fisher, select the decrypted tar file that has your message archive in it, and then you can sort by your users' aliases. Click on the right one, and it will present you with a list of all the voicemails that user has in that backup set. You would imagine that selecting all the messages would extract each voicemail to an individual file, but that is not how this application behaves. My experience is that you really should extract one message at a time, because the app dumps its saving folder after every request and cannot understand multiple selections, even though you can make them. It is also eight years old, so that it functions at all is a miracle.

You start at the top, click the first message, and then "Extract Message Locally," which should open a window and show you the resulting WAV file. I learned that without Administrator mode, you never get that folder; the app just opens your Documents folder and does nothing constructive. In case you need help finding it, look here:

C:\Program Files (x86)\Cisco Systems\DRS Message Fisher\TEMPMsgs

With the app in Administrator mode and a message selected, click the button mentioned above. This will open the TEMPMsgs folder and show you the WAV file. Click and drag this anywhere else to actually save it. Then advance to the next message and extract, and so on and so forth until you have all the messages extracted. There won't be any useful data in the filename, just a UUID, so I suppose we should be happy we are getting the audio at all and count our blessings.

Once you have all the WAV files you need, then you can dump the voicemail account and move on.

What a mess. I look at Cisco and marvel at the configurability of both CallManager and Unity, but watch them trip embarrassingly hard on things like this. Apparently nobody ever cared enough to address voicemail survivability and extraction. As far as I know, this overwrought and wretched procedure is the only way to meet that particular need. It goes without saying that this is wholly and completely unsupported by Cisco TAC, so save yourself the headache of running full-speed into that bulkhead.

In many ways, this solution, if you can call it that, is getting voicemail survivability by dumpster diving. Ah well, it's Cisco; complaining is very much like going down to the river and screaming at it to change course. You'd probably get further with the river.

Giving Chrome Some Pep

I've been using Google Chrome on my MacBook Pro for a long while, and I've noticed that some websites take some time to get moving along. In some ways, it feels like the browser is panting, trying to catch its breath. So today, while trying to solve a work problem, I accidentally stumbled over a neat way to give my Chrome browser a little boost in performance. It seems to help most on sites that are interactive, like my work help desk site or PNC online banking.

The trick is to create a small RAM drive on the system, copy the Chrome profile over, link to that profile so Chrome can find it, and then start using Chrome. As Chrome works, things like settings and cache data go to RAM instead of the drive in my MacBook Pro. I then use rsync to copy the data into a backup folder, just in case the machine suffers a kernel panic or something else that would dump the RAM drive.

There are a few pieces to this, mostly from scripts I copied off the network.

I copied a script called mount-tmp.sh and made only a few small adjustments, specifically changing the maximum RAM drive size to 512MB.
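
I won't reproduce the script I found verbatim, but here is a minimal sketch of the usual macOS RAM-drive recipe it follows. The ~/tmp location and the 512MB size match my setup; the rest is an assumption about how such scripts typically work:

#!/bin/bash
# Hypothetical reconstruction of mount_tmp.sh. hdiutil sizes RAM drives
# in 512-byte sectors, so 512MB works out to 1048576 sectors.
SECTORS=$((512 * 1024 * 1024 / 512))

if [ "$1" = "umount" ]; then
    # Detach the RAM drive and clean up the symlink; the contents evaporate.
    hdiutil detach /Volumes/tmp
    rm -f "$HOME/tmp"
else
    # Create the RAM-backed device, format and mount it as "tmp", and
    # point ~/tmp at it so the check-in/check-out scripts can find it.
    DEV=$(hdiutil attach -nomount ram://$SECTORS | tr -d '[:space:]')
    diskutil erasevolume HFS+ tmp "$DEV"
    ln -sfn /Volumes/tmp "$HOME/tmp"
fi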

Then I created two different bash scripts to check-in the profile to the RAM drive and then to check-out the profile from the RAM drive back to the HD. Since I wrote them from scratch, here they are:

check-in.sh


#!/bin/bash
# Mount the RAM drive at ~/tmp, move the Chrome profile onto it, and
# leave a symlink behind so Chrome still finds the profile. Then seed
# the on-disk backup copy with rsync.
/Users/andy/mount_tmp.sh
mv /Users/andy/Library/Application\ Support/Google/Chrome/Default /Users/andy/tmp
ln -s /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome/Default
rsync -avp --delete /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome/Default_BACKUP
echo "Complete."

check-out.sh


#!/bin/bash
# Take a final backup, remove the symlink, move the profile back to
# disk, and unmount the RAM drive.
rsync -avp --delete /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome/Default_BACKUP
rm /Users/andy/Library/Application\ Support/Google/Chrome/Default
mv /Users/andy/tmp/Default /Users/andy/Library/Application\ Support/Google/Chrome
/Users/andy/mount_tmp.sh umount
echo "Complete."

If you give this a shot as well, I would love to hear from you about your experiences with this little speed improvement hack! Hope you enjoy!

Moment of Geek: Raspberry Pi as Thermal Canary

A few days ago, I ran into a problem at work. The small Mitsubishi air conditioner decided to take a cooling nap in the middle of the day, so my office, which is also the machine room at work, was up around 85 degrees Fahrenheit. I am used to this sort of thing, summers bringing primary cooling systems to their knees, but this time I had a huge A/C unit in the ceiling that I had elected not to have removed, left in place just in case. So I turned it on, set its thermal controller to 70 degrees, and the room temperature tumbled in about ten minutes. Right after the room temperature was back to normal, and I had service out to look at my little wall-mounted A/C unit, the damn thing started functioning normally again. The tables turned on IT: this is exactly what happens to our users. They sit there and struggle, and then we arrive and the machines behave like nothing at all was wrong.

So I had the big A/C and its smaller wall-mounted unit both running overnight, and I faced a problem: I wanted to know the temperature in my machine room without having to buy a TempPageR device. I had one long ago, and it was rather expensive. I looked at my desk and noticed my Raspberry Pi, just sitting there, doing nothing of consequence. I knew the Raspberry Pi had a CPU temperature interface hidden somewhere, and a brief cursory Google search happily turned up a website detailing how to use this exact feature from the Python programming language to write a temperature log and, optionally, graph it. It was mostly copypasta, adapting things I found online pretty much by copy and paste and hammering them here and there until they worked. I have programming skills, but they are rather dated and rusty, and I had never used Python specifically. Still, my first effort was a success: I had a one-second temperature logger in place. I was rather happily satisfied with my efforts, but I knew I would not be happy with Celsius, and I also knew the reading was colored by the CPU in the Raspberry Pi itself, so the reported temperature was quite a bit higher than the room temperature.
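
For the curious, the interface itself is nothing exotic; on a stock Raspbian image you can poke at it straight from the shell before involving Python at all:

# Reports millidegrees Celsius, e.g. 46160 means 46.16 C.
cat /sys/class/thermal/thermal_zone0/temp
# The firmware tool reports the same reading in a friendlier format.
vcgencmd measure_temp
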
I started to tinker, first searching for the equation to convert Celsius to Fahrenheit: F = C × 9/5 + 32. With that in place, the Pi reported 115 degrees. When I turned on the big A/C device, its thermal controller displayed the ambient room temperature as 74 degrees F. So I did some math and subtracted a constant 44 degrees from the CPU temperature, which "calibrated" it to a rough approximation of the room temperature. Eagle-eyed readers may notice that my math is off, but after I moved the Pi over to the server stack, I had to adjust for a higher CPU temperature, since it sits further away from the wall A/C unit. So now I had a one-second temperature logger in Fahrenheit. I turned on graphing, and the entire program crashed and burned; I wasn't running the application in an X Windows environment, so I tore out the graphing library and code, since I was never going to use that feature anyway.

That, of course, was not enough to replace the TempPageR device. I needed an alarm to alert me to what was going on. I thought through the interfaces, email, SMS, iMessage, email-to-telephone-call cleverness, and each thought brought me up against a different version of the cliffs of insanity. I could probably have smashed and hacked my way to a solution involving some ghastly labyrinth of security settings, passwords hashed with special algorithms that are only available on ENIAC simulators that only run on virtualized Intel 8086 processors with the Slovenian language pack loaded and the Cyrillic character set; an arrangement that would be an epic pain in the ass. But earlier in the day, I had tripped over an ad for a Slack app that could take incoming data from the Pingometer website. I have a Pingometer account, a free one, because I'm a cheap bastard. The single pinger externally checks my fiber-optic connection at work, keeping AT&T on their toes when it comes to outages. Pingometer feeds Slack through incoming webhooks. An incoming Slack webhook comes from some source that makes a really simple HTTP call: it wraps JSON in HTTP and sends the request to Slack's servers. Slack then does everything needed to make sure the message is pretty and ends up on the right channel, on the right team. This was my alert mechanism.
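
To give a sense of how simple that call really is, here is the same webhook exercised from the shell. The URL is the placeholder from my config, and the message text is invented for the example:

# POST a JSON payload at the incoming-webhook URL; Slack does the rest.
curl -s -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text": "Machine room temperature: 91.4 F"}' \
  'https://hooks.slack.com/services/####/#####'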

So I did another Google search, found the intersection between Linux, Python, and Slack, did some more copypasta and some tinkering, and I had a Python app that logged the room temperature in degrees F and made my Slack a noisy mess, since it was sending incoming webhook requests every second. One more tweak, a super-simple IF-THEN block, set my high-temperature mark at 90 degrees F and let it go.

There is something satisfying about being able to hack something together, cobble it really, and have it work without blowing up on the terminal, blowing up Slack, or otherwise failing. So now I have a $35 Raspberry Pi running as a rough temperature alarm; it sends alerts that let my System Admin and me know at the same time over Slack. I'm quite happy with how it all worked out. No obnoxious email settings, ports, security frameworks, or awkward and obtuse hashing routines, just a single JSON-formatted HTTP call and BAM, all set. An alarm with a date, a time stamp, and a temperature, delivered right onto my iPhone with automatic notifications from Slack, so it can wake me up if I need it to.

So anyways, without further ado, here is the code:


from gpiozero import CPUTemperature
from time import sleep, strftime, time
import json
import requests

# Set the webhook_url to the one provided by Slack when you create the
# webhook at https://my.slack.com/services/new/incoming-webhook/
webhook_url = 'https://hooks.slack.com/services/####/#####'

cpu = CPUTemperature()

# Append the reading to a CSV log, and fire a Slack webhook if the
# (already converted) temperature crosses the 90-degree F alarm mark.
def write_temp(temp):
    with open("cpu_temp.csv", "a") as log:
        log.write("{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(temp)))
    if temp > 90:
        slack_data = {'text': "{0},{1}\n".format(strftime("%Y-%m-%d %H:%M:%S"), str(temp))}
        response = requests.post(
            webhook_url, data=json.dumps(slack_data),
            headers={'Content-Type': 'application/json'}
        )
        if response.status_code != 200:
            raise ValueError(
                'Request to slack returned an error %s, the response is:\n%s'
                % (response.status_code, response.text)
            )

while True:
    temp = cpu.temperature
    # Convert Celsius to Fahrenheit, then subtract the constant 44-degree
    # fudge factor that roughly calibrates CPU temperature to room temperature.
    temp = (9.0 / 5.0 * temp + 32) - 44
    write_temp(temp)
    sleep(1)


It has been forever since I’ve needed to program anything. Once I was done, and I saw it work the way I wanted it to, I was quite happy with myself. I haven’t felt this particular sense of accomplishment since my college years. It was quite a welcome feeling.

Blogo Test

One of the biggest headaches with my WordPress blog is remembering to write new posts with any frequency. It sometimes comes down to a test of many editors: which one do I like, and how smooth is the path from writing a story to getting it onto my blog? Email is a pain, mostly because the instrumentation to add all the extra bits is rather annoying, and I don't really want to revisit and mark up a blog entry after I've written it. Then I looked at BBEdit, the big brother to TextWrangler. The folks who wrote that application provided a free trial of BBEdit and gamely informed me that TextWrangler was a dead duck. I never really engaged with BBEdit enough to think of it as a blogging tool, and TextWrangler is still pretty good for what I need.

Since I've had this blog for a long while, and I feel a little bad about neglecting it, perhaps it's time to dust off the old blogging tools and hop back into it. Facebook is the Walmart of social media: it's everywhere you look, but you feel dirty using it, because you know that every time you walk in you're being measured, indexed, and categorized.

Facebook, like Walmart, brings a ready audience to the party, people who are happy enough to just waddle through and notice what you have to write and maybe drop a like down. Blogging has always been longer-form than Facebook, and way longer than Twitter. Plus since I host my own blog on my own domain, I can write things here that I wouldn’t or can’t write in other places.

So this test will see how well this little app called Blogo works on my MacBook Pro. If it’s good, we’ll likely have more stories appear in the blog in the future.

DayOne 2.0 to Evernote Migration

Years ago, I started keeping a personal journal. Mostly I used Microsoft Word, sometimes other text editors, and I was always searching for a better way to conduct my journaling habit. When I started using Apple's Mac, I discovered Bloom Built's DayOne journaling software. DayOne came highly recommended on many pro-Mac websites, so I bought in. Journaling went fine for a while, and then I noticed that the authors of DayOne were going to release DayOne 2.0. I eagerly jumped on board with the update and forged ahead. The feature set was welcome: multiple journals, and a more refined syncing experience through an online service run by the developer (AWS, as it turned out). I was rather happy. The new syncing system also came with extensions for one of my favorite online time-savers, IFTTT. What more could I ask for?

Then the updates started. I noticed in my Mac App Store application that there was an update listed for DayOne 2.0, but when I clicked on it, the system acted as if no update were present. I took this deviation from expected behavior for some glitch or bug, so I dove into the problem and searched Google for hints. There were so many options in the Google index that I figured one of them should solve my problem. By the end, I had completely roto-rootered the entire Mac App Store; I tried one last time, and still DayOne 2.0 refused to update. I gave up, figuring it was something that maybe a reinstallation of the operating system would solve, because this sort of unexpected behavior doesn't happen with Apple products in my common experience. Then, resistant to being beaten by a bug, I forced the issue with the App Store and tried to download the DayOne 2.0 update directly. I discovered to my chagrin that the update required the next edition of the Mac OS, El Capitan. I have a vested interest in staying with Yosemite; I'm happy with my MacBook on Yosemite, so why should I upgrade the OS to satisfy an application?

The next injury came shortly after that. Using DayOne 2.0 was rather miserable, since the software acted downright sluggish. I would type, and the application would just pinwheel or pause, and then, in a blur, all my words would spill onto the display at the same rate at which I typed them. I wasn't getting the instant response to keyboard input that I expected. Other applications behaved properly (TextWrangler, for example, behaves perfectly to my expectations), so this wasn't a system problem; it was a DayOne 2.0 problem. Some time before this, I had sprung for a copy of Little Snitch on my Mac to help me better control my network interfaces, and Little Snitch can block an application from accessing the network. So, on a lark, I tested the sluggish DayOne 2.0 by blocking its network access with Little Snitch. It was like flipping a light switch! The sync component was broken, showing a red exclamation mark, but man, text entry was back to normal, and tag entry was super quick as well. I didn't have to wait through pinwheel after pinwheel to get where I was going. I wanted to journal, to get my text into the system for safekeeping and remembering. So for a while, I would use Little Snitch to deliberately break DayOne 2.0 so I could use the application the way I wanted to, the way I expected to. I wrote to Bloom Built and asked if they would update the application for users who didn't want to march forward to El Capitan or Sierra, and they declined. It was a long shot, but I figured it was in their best interest to address the largest group of users, which would presumably include people on Yosemite. It wasn't to be.

After thinking about it for a while, and growing weary of the rather extended procedure of getting Little Snitch to block DayOne 2.0's broken sync routines, I made the fateful decision to switch my journaling to Evernote. Why Evernote? Because it was on all my devices, just like DayOne 2.0 (at least the Mac ones), and Evernote already had IFTTT integrations, so that was set. Evernote was something I knew, and its syncing routines are significantly better than DayOne's. That has to be tempered by the fact that sometimes Evernote's syncing also breaks, but a one-off, hard-to-diagnose sync error beats a broken sync routine that throws pinwheels whenever you type text or enter tags, as DayOne 2.0's does. Evernote also has one extra feature, which wasn't part of the decision, but now that I've made the switch I'm glad for it: you can highlight text in Evernote and encrypt it with AES. This is something DayOne 2.0 had promised, but they were, by all appearances, dragging their heels on journal security.

I then started writing all my new journal entries in Evernote. That was very straightforward. However, I left about 11,000 entries behind in DayOne 2.0, so I started looking at ways to get that data out. There are a few export options: text, PDF, HTML, or JSON. What I wanted was for individual entries to move over to Evernote and remain individual entries there. But everything that comes out of the DayOne 2.0 exporter comes out in chunks: one big HTML file, one big PDF, one big JSON file, one big text file. There is no easy way to get individual entries out one at a time unless you want to manually slog through every single entry; at 11,000 entries, that wasn't going to happen. I have no patience for that. So I started looking at ways to hack apart my DayOne 2.0 exports, since the people who wrote the app didn't provide anything helpful, and all the tools I found online were written solely for DayOne 1.0, which didn't help: I didn't have a Journal.dayone file, I had an AWS-hosted JSON chunk. So the hackathon commenced. HTML was a giant headache, since there isn't any way to split HTML into chunks syntactically, at least not with the data DayOne 2.0 exports. The PDF was a mess: one immense file with the text in 8-point, which would be fine if I were 20 years old and didn't mind slogging through a monolithic PDF looking for a date. I even tried to hack around the JSON in my limited way. I got the JSON out to CSV, but then realized that my instinct to make the CSV a data source for a mail merge, and mail-merge my journal out to individual entries, was going to be a bust: Macs don't do mail merge at all. I made peace with that a long while ago, not that I ever had any work that needed mail merge. So there was only one format left, the most basic one: text.

DayOne 2.0 spits out a journal as one monolithic text export file, so I had to figure out how to hack this text file into pieces. I spent a long while at the bash terminal, screwing around with csplit and discovering the subtle differences between Apple's implementation of csplit and GNU's. After a night of blind hacking, I gave up on csplit. Of course, by this time I had also given up on DayOne 2.0; it wasn't the application I wanted anymore. My feelings had soured against the developer for only going so far with their export code and leaving the rest for me to hack out on my own. I was irritated, and felt cheated that they didn't go one step further and include an "export individual entries" checkbox somewhere. But I got over my funk; I burned that bridge, and there was no reason to keep complaining about it. I was moving to Evernote, and Bloom Built was pretty much post-fire, all sad ashes soaked with water. Nights of searching and hacking on this monolithic text file, and I eventually found the solution. The first step comes with Perl:

#!/usr/bin/perl

undef $/;
$_ = <>;
$n = 0;

for $match (split(/Date:\t/)) {
    open(O, '>temp' . ++$n);
    print O $match;
    close(O);
}

This little script is something I found through Google; I'm far too lazy to hack it out on my own, if I'm brutally honest. The key in DayOne 2.0 entries in this monolithic text file is "Date:" followed by a tab character; every entry starts with it. So, export the journal to Journal.txt, then run this script against it: ./split.pl Journal.txt. Perl tears the file into perfect chunks, ready for action. But the files come out named temp1, temp2, temp3, and so forth. Two more lines add the crowning bits to each file: the first tacks on a .txt extension, and the second grabs the first line of each file and makes that line the new filename. In DayOne 2.0, the first line is the date line, so now my entries have their dates as their filenames. This is a compromise; I would much prefer to have the dates preserved in the file metadata, but hey, you get what you get:

for f in temp*; do mv "$f" "$f.txt"; done
for f in temp*.txt; do mv "$f" "$(head -n 1 "$f").txt"; done

So for my test journal, I exported from DayOne 2.0 to text, chopped it all up with Perl, and used the bash shell to hack the names into something I was happy with. Then I lassoed the entire batch of files and dragged them into Evernote. Once I had done this for all my journals, I closed DayOne 2.0 and left it alone. There is no point in trashing it; let it dwell on in a ghostly non-life for all I care. Evernote at least behaves properly when it comes to text entry and tag entry, and the syncing routines are better. Plus, Evernote will never abandon me the way Bloom Built did. They'll never stop updating Evernote for Yosemite, or if they do, it'll be so far down the pike that I'll have a new laptop by then, and all of this will be just so much foolish wrangling anyway.

In the end, I won. I won against an annoying choice made by a company I used to love; I won against a file format that seems so dumb; and I was able to shoehorn years of journaling into a new home without having to abandon my past or make it so annoyingly inaccessible that it would amount to the same thing.

If you find an interest in switching from DayOne 2.0 to Evernote, this is one way to do it. There may be better ways, clever ways to convert the JSON to the Evernote import file format, perhaps, but I didn't care enough to slog through JSON; this is my way, in all its dumb glory. Of course, my tags from DayOne 2.0 are shot, and tagging in Evernote is a manual affair, so that was another little compromise. Perhaps if I have dull weekends or evenings, I can hack the tags back in over time. Having the entries and losing the tags is an acceptable loss. At least I no longer need to make Little Snitch break DayOne 2.0 just so I can use it. Heh, that's still something that makes me shake my head in disbelief. That you have to do it this way at all is such a mess.

New Year Resolutions

This new year, I resolved to be done with Twitter, Facebook, and Reddit. I had abandoned Twitter a long time ago; Reddit was easy, as I was never really invested in that platform anyway; and most recently I left Facebook behind.

That needs a little characterization. I haven't deleted my Facebook account, but I have ceased to engage on the platform. I still check in once a week, just to mop up my timeline recommendations from people tagging me in their posts and to otherwise establish a heartbeat there, so the people who follow me on the service notice that I still live. I suppose that eventually even bothering with the heartbeat updates will grow tiresome, and I'll give up on that as well.

I have instead moved my entire social networking existence to a new service called Imzy. It's at imzy.com, and I encourage everyone to join me there. There are some pretty good AUP rules in place, and communities can have extended rules of their own, building off the site's core AUP. Imzy is a perfect place to have real discussions with people online. There is a culture on Imzy that I haven't found anywhere else: the lack of trolling I witnessed there is what led me to dump Facebook.

I don't know what this means for this blog. Imzy is a great platform all on its own, and when it comes to blogging, my community there offers a lot that my blog can't match. A sense of community is what is missing from a lot of services, and from my blog; this service is mostly just a billboard for me to yell at the darkness. There aren't any real conversations going on here, unlike on Imzy.

I figure if I don’t post more blog entries I may just archive all of this stuff and shutter the service completely. Then again, I may just be lazy and let this blog long-tail it to eternity. Only time will tell.

Assert The Win

Sometimes the best thing is to assert that you win and walk away from a toxic problem. So far today, I've done that quite a bit. What have I abandoned?

I've walked away from Facebook. It's been four days since I last logged in, and I haven't missed it. I've been catching up on my news instead; the Spiceworks Community board consumes a lot of time, and after that I turn my attention to my Pocket list. There just isn't enough time anymore to deal with Facebook. When I did log in, I had eighteen notifications, and I frowned and realized that I didn't care that much. I'm writing a lot of my thoughts into my journal, having come to the realization that sharing them with others isn't going to be a positive experience. Nearly everything on Facebook is an unpleasant experience now. Abandoning toxic things seems to be good for me.

Another toxic system is Office365. Microsoft and I go back a long while, right along with my almost palpable hate for the company and its products. Going into just how Office365 lets me down would be very dull. Nearly every interaction has me wishing I could close my laptop, put it in my backpack, and run away from my life. Everything with some Microsoft technology attached to it has me frowning in deep disappointment. Alas, there is no way to escape the Great Beast of Redmond, so we gnash our teeth and endure the horrors.

The final horror is WordPress itself. I use a stock theme, Twenty Twelve. It's not a custom theme; it's not slick or responsive; it's just a dumb theme. While reading my blog, I realized just how much I wanted to change the line spacing of my post entries. This is where my expectations fork: there is an Apple fork and an "everything else" fork. The Apple fork has proven, time and time again, that the answer is simple and shallow and easy to get to; you understand what the change will do, and you make it work. Then there is everything else, and here we have WordPress. I wanted to change the line spacing on my theme, so I went to the Dashboard and spent ten minutes blindly stabbing at possible places where the option might be hiding, to no effect. Then I did a Google search, which is the first and last place that most possible solutions are born and die; a good Google search almost always yields the answer you are after. So, "WordPress vertical line spacing" led to a place that eventually had the solution in it, but the theme didn't match what I was expecting, which is the core of frustration. I modified the search to include the theme's name itself, and that helped. I found the setting, and it was in a CSS stylesheet. I left the WWW when it was still HTML-only; CSS irritates me. But anyway, hack the CSS, that's the answer. It's a dumb answer, but that's it. I found about 130 places where line-height is an option and laughed bitterly at the number. Which section to edit? Are you sure? So I gave it a shot: I set the line-height to 2.0 and then looked at my site. I can't tell if it improved or not. But the most adaptive solution is to assert that it did what I wanted, mark the win as a notch, and move on. Do I care? Well, I wanted to do something. I did something. Did it work? Probably not.

But then we get back to that first fork. That’s why I love Apple so much. Nearly everything they touch MAKES SENSE. I don’t have to struggle with some labyrinthine mystery. Maybe my edits will work, maybe they will break whatever it is, maybe it won’t matter. Maybe any setting I change will be overridden somewhere else, by something that was never documented. That’s the core design principle of both WordPress and Microsoft. I suppose we should just be happy that the most basic functions work. Much like the Internet itself, the fact that any of this works is a daily miracle.

So instead of writing a huge rant, one that nobody wants to read and nobody cares about, I will assert that I won, psychologically move forward, and forget the conditions that led me to those particular experiences. The blog doesn't work like you want? Don't go there. Facebook is a cesspool of ugly humanity? Skip it. Microsoft? Ah, if only it would burn to the ground. But we can't have what we wish for, even if we'd do anything for our dreams to come true.

So! Hooray! A Win! Facebook, WordPress, Office365! Just stop worrying about the bomb. It’s “Someone Else’s Problem®”

Sample Malware

Today I received a sample email that some of my coworkers caught, and they asked me to look into it. The email contained a bit.ly link, which I was able to examine through a clever little trick: appending a + character to a bit.ly link doesn't load the destination site, but instead shows you information about the link. This one had been clicked about 7,000 times. Already I knew we were dealing with malware, so it was not a question of whether this was a rabbit hole, but rather of how deep it went.
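
If you'd rather do the unwrapping from a terminal than a browser, curl can ask for just the redirect header; the short link below is invented for illustration, not the real one from the email:

# bit.ly answers with a 301 whose Location header is the true destination.
curl -sI https://bit.ly/2AbCdEf | grep -i '^location:'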

I pulled the destination out of the bit.ly link and handed it to curl in the terminal on my MacBook Pro. I didn't expect curl to do anything but show me the text of where the link goes. It headed to a PHP file on a presumably hacked web server or blog. The PHP itself was an HTTP refresh-redirect to a Dropbox-hosted file. So I opened up my virus lab VM and followed where this led. The Dropbox content claimed to be a 1MB PDF file, but when I opened it, it led to a phishing attempt.

The phishing page had an obnoxious URL attached to it, so I pulled that out and discovered it was encoded in base64. I decoded the text chunk online, and it revealed a JavaScript block formed by a single call to the document.write(unescape()) function.
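
There is no real need for an online decoder, by the way; the Mac terminal does base64 natively. The encoded string below is a harmless stand-in (it decodes to "hello world"), not the actual payload:

# Decode a base64 chunk locally instead of pasting malware into a website.
echo 'aGVsbG8gd29ybGQ=' | base64 --decode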

Whoever it was went to great lengths to obfuscate their malware. Ultimately it led nowhere, because we caught it. I find this sort of thing fascinating to pull apart, like an easy little puzzle to unravel. The phishing attempt is after email usernames and passwords, and if someone falls for it, then, thanks to people usually being lazy with passwords, once you have one password, chances are you have all of them on every other site.

Just another reason to use a password manager and keep a unique password for each site. If one site is breached, the damage is limited to that one site, not all of them.