Sunday, March 1, 2015

Review: Pockethernet, "The Swiss army knife of network administrators"

Being a systems & network administrator, sometimes I get to deal with making, crimping, repairing, or otherwise troubleshooting cables, switches, and basic networking functions. In those situations one might normally use a Fluke or other network tester - for example, when a user calls in needing a long Ethernet cable made and you don't have any non-bulk cable handy. Unless you do this a lot, though, a Fluke isn't something you'll just have sitting around, so you might resort to visually inspecting a cable or plugging it into whatever network hardware you have lying around to see if it works. This process can be cumbersome, inefficient, and take far more time than it otherwise should.

Well, not anymore. The Pockethernet, a network testing device stemming from an Indiegogo funding campaign in early 2014, is attempting to change all of that. Brought to life by the German duo of Zoltan Devai and Jeroen van Boxtel, it aims to be an all-in-one tool that makes every IT person's life easier. I received mine in the mail a few days ago and have been playing with it since then to develop some first impressions. Past the video, I get into the unpacking and the Pockethernet's uses!

The original Indiegogo campaign closed in March of 2014 with 370% of the original $50,000 goal, and there have been several delays since the devices were supposed to have shipped. This is not the end of the world, and only a couple of times did I wonder whether I might not get the device at all. Over 20 updates were sent out to backers between the campaign closing and now, keeping us up to date on the status of production, shipping, and several challenges encountered along the way. Producing the devices took longer than expected, with a couple of revisions and retooling required; regulatory approvals from governing bodies worldwide took longer than expected to sort out; and there were even difficulties dealing with UPS/DHL.

All of that is now nearing completion, and devices have been and are being shipped. After being unpacked, this is the case that the unit and cables came in.
Pockethernet in case
In addition to the case above, two documents were included: a welcome document, and a manual about the device itself. The first page of the manual is dedicated to regulatory information (legally-required text, yadda yadda yadda), and the rest of it covers the device itself.
Pockethernet case opened
The Pockethernet comes with a dual-function dongle - one end serves as a wiremap target for the cable under test, the other turns the cable into a loopback - plus an Ethernet cable and a short USB cable for charging.
Front of Pockethernet 
The device itself is contained in a machined metal case, and feels incredibly sturdy. The front and back of the device are closed with clear plastic so that you can see through to the other side. The PCB is the main component inside the case, and the 750mAh battery is situated just on top of it.
On this front side are the Ethernet jack itself and 5 LEDs:
Pockethernet - Front
Pockethernet - Rear
Rear of Pockethernet
It appears that the logo and regulatory information printed on the outside of the case are reversed from where they should be. When oriented so that the Pockethernet name and logo are visible as in the photos above, the PCB sits near the top of the case with the battery hanging below it. When flipped to match the illustrations in the manual, the regulatory information is right-side up - and who wants to see that?
Wiremap/Loopback dongle
This little dongle comes with the Pockethernet; there is not yet any documentation to go along with it, so the "Loopback" and "Wiremap" labels on it are all there is to go on. Those are fairly self-explanatory, however.
Pockethernet start screen
When starting up the Pockethernet Android application this is the first and main screen that appears. Down the left column are icons for the 4 cable pairs, Power over Ethernet, link, and DHCP/network information. Selecting Connect at the bottom of the screen starts the connection between the phone and the Pockethernet. To do a regular cable test on a disconnected Ethernet cable, the Wiremap portion of the dongle lets you test to make sure that the cable generally works.
Wiremap test function
Once connected to the cable that you want to measure or test, selecting the Refresh button at the bottom is what gets the test to actually run. The screen above shows the cable pairs and which pins are connected where. This test was done with a regular straight-through Ethernet cable, so the wiremap function simply reflected the signal back to the Pockethernet from the opposite end. Per the backer forums, the O, S, and C listed next to each pair appear to stand for Open, Shielded, and Closed, in reference to the electrical circuits.
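For reference, a healthy straight-through cable's wiremap maps every pin to the same pin at the far end, while a classic crossover swaps the 1/2 and 3/6 pairs. Here is a small sketch of how a tester might classify a wiremap result - my own illustration, not the Pockethernet's actual logic:

```python
# Classify a wiremap (near-end pin -> far-end pin) result.
# A straight-through cable maps each pin to itself; a 10/100
# crossover swaps pairs 1/2 and 3/6 (T568A vs. T568B ends).
STRAIGHT_THROUGH = {pin: pin for pin in range(1, 9)}
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}

def classify(wiremap):
    if wiremap == STRAIGHT_THROUGH:
        return "straight-through"
    if wiremap == CROSSOVER:
        return "crossover"
    return "miswired or damaged"

print(classify({pin: pin for pin in range(1, 9)}))  # straight-through
```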
Wiremap TDR results
Time-Domain Reflectometry (TDR) is used to find the length of the cable that you have connected. In this case, I tested with a 2-meter cable. The reporting is wonky and shows different lengths for the pairs, which is a known issue. The developers hope to release an updated version of the application in a few weeks with fixes for bugs reported in that timeframe. Even so, the results appear to properly show that the cable is not "connected" to another device (Wiremap function, not Loopback) but is returning results.
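The arithmetic behind TDR is simple: a pulse is sent down the pair, the time until its reflection returns is measured, and the one-way length is half the round-trip distance. A sketch of that math (not the Pockethernet's implementation; the velocity factor is an assumed typical value for twisted pair):

```python
# Sketch of TDR cable-length math: a pulse travels down the pair
# and back, so one-way length is (speed * round-trip time) / 2.

C = 299_792_458  # speed of light in a vacuum, m/s

def tdr_length(round_trip_seconds, velocity_factor=0.68):
    """Estimate one-way cable length from the pulse round-trip time.

    velocity_factor: fraction of c at which the signal propagates;
    roughly 0.64-0.70 is typical for twisted-pair Ethernet cable.
    """
    return velocity_factor * C * round_trip_seconds / 2

# A ~2 m cable reflects the pulse back in roughly 20 nanoseconds:
print(round(tdr_length(19.6e-9), 1))  # ≈ 2.0 m
```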
Cable gigabit information
This screen isn't the most helpful, but it appears to show that all four pairs are connected and which side is transmitting or receiving.
Online network test
One of the neat features of the device is that you can do an Online test with it, meaning you can make it connect to your network, receive a static or DHCP address, and ping up to three addresses. A quirk of the device appears to be that you have to run a test on the cable first before you are able to run the Online test. Each test to verify connectivity with the server(s) that you specify takes approximately 3 seconds, so you can quickly get enough information to know what is working and what might not be.
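The same style of check is easy to reproduce from a laptop while you wait for app updates. A rough equivalent of the Online test's ping stage, using the system ping command (the target addresses are just examples, and the flags shown are the Linux ones):

```python
# Rough stand-in for the app's Online test: ping up to three
# targets and report which ones respond. Targets are examples.
import subprocess

def check_targets(targets):
    results = {}
    for host in targets[:3]:  # the app allows up to three addresses
        # -c 1: send one echo request; -W 3: ~3 second timeout,
        # similar to the per-test time the app appears to take
        proc = subprocess.run(
            ["ping", "-c", "1", "-W", "3", host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        results[host] = (proc.returncode == 0)
    return results

print(check_targets(["192.168.1.1", "8.8.8.8", "example.com"]))
```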
Offline network testing
If you have a cable toner, the top half of the Offline testing screen is useful to you. Like any other tone test, you can use this to find a cable in a patch panel or another scenario where locating it would otherwise be difficult. The bottom half lets you test the Bit Error Rate (BER) of a cable using the Loopback side of the dongle. Testing at 1000Mbps is currently broken (another known issue) but 10/100Mbps works just fine.
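For anyone unfamiliar with the metric, BER is just errored bits divided by total bits sent through the loop - the sketch below shows the arithmetic, with made-up numbers:

```python
# The arithmetic behind a Bit Error Rate figure: errored bits
# divided by total bits pushed through the loopback.

def bit_error_rate(error_bits, total_bits):
    if total_bits == 0:
        raise ValueError("no bits transmitted")
    return error_bits / total_bits

# e.g. 3 errored bits in one second of traffic at 100 Mbps:
print(bit_error_rate(3, 100_000_000))  # 3e-08
```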
Generate report screen
A neat feature built into the software is that you can send a PDF of the latest test result that you have generated to an email address in order to export some data. It piggybacks off of whatever mail program you have set up on your phone, but it gets the job done. The report itself is basic and includes all the information shown above minus the graphics, so it isn't incredibly useful yet. Styling and image work will go a long way to make the report feature more useful. Yes, that is a button in the bottom-right that lets you attach a photo to the generated report, perhaps of the drop that you are testing on or other verification of the job you've just completed.
Generated PDF of test results
The report, while nifty, isn't incredibly useful yet. The wiremap and TDR information, as shown above, appears to be incorrect - displaying Short/Open where Closed would be the proper result, along with a 0m length where a greater-than-0m result should appear. Again, this is a software bug that can and hopefully will be fixed with an app update.
Finally, the software has a couple basic options that you can change as well as lets you see the device and software information. There isn't all that much here yet, but I expect more to come in future software updates.

While playing with the device for a short while I discovered and read about a few odd things it does, some of which can be worked around.
  • There isn't yet a quickstart or how-to guide for using the product, although there is an active backer forum where questions are being answered very quickly. If you generally know something about networking and what you are doing, then it probably will not take you long to get up to speed and make use of the device.
  • The Pockethernet cannot be used while connected to its USB charger.
  • Before you can start an Online test, you are required to run a generic cable test, presumably to let the software know that it's connected to something.
  • The cable length results are sometimes correct, sometimes not. This is a known issue which should be fixed shortly.
  • Testing the BER at gigabit speeds currently does not work, although it does at 10 and 100Mbps. As before, this is another known issue reported on the forums.
So...does it really do that?
The device as delivered is a solid piece of hardware that accomplishes many of the goals it set out to accomplish, but the software is not completely there yet. For something I put money down on before there was even a physical product, it holds great promise through continued software updates to fix bugs and add functionality. LLDP/CDP support is specifically slated to be added, and other planned features include VLAN support, SNMP/web configuration, and 802.x support.

The Pockethernet is still in its early stages, but looks to have only a bright future ahead with continued software updates. At $150 (not yet generally available for purchase) or even slightly more, it is a steal compared to larger, more expensive network tools for basic network testing, and it will continue to chip away at the higher-end tools as more functionality is added. Having only possessed the device for around a week, I have already had to use it several times to verify cables both at home and at work, so it is already paying for itself. If you can steer around the bugs that currently exist, I would definitely suggest looking to purchase the Pockethernet when it becomes available. Otherwise, wait a few months for those to be ironed out and new features added, and then purchase yourself a Pockethernet. I can certainly see myself never needing another network testing tool as long as I have my Pockethernet by my side.

Friday, February 27, 2015

Metrorail Fares Through the Years

How much would you pay for a trip on the Washington Metropolitan Area Transit Authority (WMATA) Metro? How much are you willing to pay for a trip on Metrorail, and how fair is that charge? As 2015 continues and planning for the WMATA 2016 fiscal year starts up, there has been preliminary discussion about raising the base fares for both rail and bus, although final budget votes won't happen until later in May. With this potential new round of fare increases in the works (even if fares don't go up, local jurisdictions are being asked to pay more than last year), it is important to know some of the history of Metro fare increases since the system's inception in order to judge its current and future actions.

Metrorail Fares
Since the beginning of the Metrorail system in 1976, the transit agency has used a distance-based fare - that is, if you ride farther on the system, you pay more. The current fare structure includes a base fare and two zones, with base prices varying depending on whether you enter the system during rush or off-peak hours. As the thinking goes, if you take a longer ride on the system, you are using more of its resources and thus need to pay more to help maintain it. This has generally worked over the years to keep fares the largest source of income for the system and Metrorail the second-largest system in the US; however, significant funding from the DC/MD/VA governments has always been required to keep the system fully funded. Fares have not gone up every year, or even every other year on a regular basis, which got me interested in taking a look at the longer-term Metrorail fare history.

The first thing of interest is the cost of a Metrorail fare over time. There are two numbers we can generally rely on since the inception of the system: minimum boarding charge, and maximum fare. Per the fare calculation that Metro uses when you swipe into the system, the boarding charge is the minimum fare you are charged in order to get through the gate; this has been the case since 1976 and continues to this day. The graph below charts several pieces of information: a) max possible trip distance, b) max peak fare, c) off-peak max fare, d) peak boarding charge, and e) off-peak boarding charge. This is a lot of information, so just bear with me!

a) max possible trip distance: My understanding of this is that it is the maximum possible trip distance in the system on a line, taking the composite "as the crow flies" measurements between each station on said line. I'm not 100% sure this interpretation is correct, however. That said, in the graph below it represents the maximum distance you could travel on a line on Metro. This distance hasn't increased much since the work done in the mid-90's on the Red, Blue, and Green lines.
b) max peak fare: The maximum amount of money you are allowed to pay when riding the system during peak hours. This is denoted here in orange.
c) off-peak max fare: The same as the max peak fare, except the maximum you can pay during off-peak hours
d) peak boarding charge: The minimum amount it will cost you to step into the Metrorail system during peak hours
e) off-peak boarding charge: The minimum amount it will cost you to step into the Metrorail system during off-peak hours
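Quantities b) through e) act as a cap and a floor on the distance-based fare described earlier. A rough sketch of how such a fare computation works - the rates here are made up for illustration and are not WMATA's actual tariff:

```python
# Illustrative distance-based fare: the boarding charge is the
# floor, a per-mile rate is added, and the max fare is the cap.
# All numbers below are made up; they are not WMATA's real rates.

def metrorail_fare(miles, boarding_charge, per_mile_rate, max_fare):
    distance_fare = boarding_charge + miles * per_mile_rate
    return min(max(distance_fare, boarding_charge), max_fare)

# Short trips pay the boarding charge plus distance; long trips
# hit the cap no matter how many extra miles are ridden:
print(metrorail_fare(2, 2.15, 0.30, 5.90))   # 2.75
print(metrorail_fare(25, 2.15, 0.30, 5.90))  # 5.9 (capped)
```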

Minimum and maximum potential costs of riding Metro
A line was superimposed over the three most interesting sets of data - the max peak fare and the peak/off-peak boarding charges. The off-peak max fare wasn't graphed, as this lower fee was essentially "created" during the 2000's to let off-peak trips cost less. As the graph shows, all three sets of data are fairly regular and conform to a linear increase, though the max peak fare increases at a higher rate than the boarding charges. Taken alone, this data would appear to show that there have been (somewhat-)regular fare increases since the beginning of the Metrorail system, and those coming this year would be no different.

Wait, what? Fare increases over time might be... normal?

Why, apparently, yes! Just as milk, gas, bananas, and other products go up in price over time, so does a trip on the Metro system, thanks in no small part to this little thing called inflation. When the costs of living go up (food, gas, etc.), so do salaries, equipment costs, and the myriad other things Metro needs to function, and some of those costs are rolled into a Metro ticket. This is usually OK: wages also increase over time, so patrons have more money available to spend on the system.

Now you might say that yes, inflation exists and things cost more because of it, but surely the cost of riding the Metrorail system has gone up significantly more over time and has not tracked with inflation. This is where my own thinking was previously incorrect: the fares appear to be much more fair to the consumer than I had thought.

This second chart shows three sets of data: the ratio of change in the peak boarding charge, the ratio of change in the off-peak boarding charge, and year-over-year US inflation. Lines were fitted to all three, and (unsurprisingly) the peak and off-peak boarding charges - comparing each fare to the one before it - track very closely to each other, at approximately a 7% increase per fare increase. The inflation rate itself has trended downward since the system opened in 1976 (9.10% in 1975, 5.40% in 1990, 1.60% in 2014), but overall inflation since 1975 still averages just over 4% per year (the double-digit inflation of the late 1970's, tied to the 1979 energy crisis, did not help at all). Compounded over time, that ~4% yearly inflation ends up exceeding the ~7% growth per fare increase as applied to the Metrorail system, because fare increases only happen every few years:

Yearly changes in metro cost vs inflation
Given this publicly-available data, and adjusting for inflation, the minimum peak fare of $0.55 in 1976 would cost approximately $2.29 today (a ~315% increase!), whereas the actual minimum peak fare is now $2.15. Fares have certainly increased every few years, but the data do not show that fares have risen any faster than inflation - in fact, they have risen slightly slower.
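The adjustment above is straightforward compounding. A sketch using an assumed ~3.7% average annual inflation rate over the 39 years from 1976 to 2015 (a real calculation would compound the actual year-by-year CPI series):

```python
# Compound a 1976 fare forward by an average annual inflation
# rate. The 3.73% figure is an assumed average for 1976-2015;
# a real calculation would use the actual CPI series per year.

def inflation_adjust(amount, avg_rate, years):
    return amount * (1 + avg_rate) ** years

adjusted = inflation_adjust(0.55, 0.0373, 39)  # 1976 -> 2015
print(round(adjusted, 2))  # ≈ 2.29, in line with the ~$2.29 figure
```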

The results and the last graph are a bit confusing, but dipping back into derivatives every once in a while is certainly nothing short of a fun adventure. In the end, the rate of change of Metrorail fare increases does not keep up with the rate of inflation. All things being equal, the average Metrorail rider actually gets more out of the system now than an average rider back in 1976 would have. This is especially interesting now that the system is almost 40 years old and covers significantly more area. The maximum peak fare that used to buy only a 14.7-mile trip now buys 15.7 miles (a meager increase), while the maximum possible trip distance has grown significantly, from 17.30 miles to 29.60. Running the Metrorail system has grown more costly over time as it expanded, but customers have been able to take advantage of fares that are cheaper, in real terms, than what was charged in the past. If only this fully funded the Metrorail system, it would be a win-win for both Metro and customers; however, there is always more work to be done.

How does this revenue from years ago compare to what WMATA gets today? Localities certainly pay much more as well, which may or may not also stay in line with previous years. What lingering Metro questions do you have?

Cost of a Metrorail trip of 1mi up to 30mi of peak/off-peak, capped and uncapped cost


Friday, January 30, 2015

Time to say goodbye to APX Labs

I write this blog post with a bit of a heavy heart, as this is going to be a significant change for me. Friday, February 6th, 2015 is going to be my last day working full-time at APX Labs as an Associate Systems Engineer. After thinking about where I am, what I want to do, and how I might want to get there, I have decided that it is time to move on to a new position. After leaving APX, I will be starting work as a System Engineer at Berico Technologies here in Reston, VA.

I started working at APX in December of 2012. I had just graduated from RIT two quarters early, and was excited about coming back to the NOVA area to a) work full-time at a "real" job - not just an internship - and b) be back around family and friends. Originally working on an engineering contract that we had for the DoD, I eventually transitioned over to handle all internal IT services for the company and juggled half a dozen other items that didn't really fit any one role.

Over the two-plus years that I have been at APX I have met and worked with a number of incredibly-talented engineers who have immense amounts of knowledge relating to what they do and are motivated to explore technology to further our collective knowledge. We have worked stressful, long hours, and played just as hard. Many friendships have been created and hopefully will be sustained long into the future. It has been a privilege working around such individuals and I hope that some of their enthusiasm and drive has rubbed off on me and will help me in the future as I progress through life. I hope to keep in touch with everyone as I am neither moving out of the area nor straying from technology as my day job.

The transition from one position to the next will be a significant change for me, but hopefully career-wise it will be for the best. I will be moving from general IT support for APX to doing Linux systems engineering for Berico. This work, focused on both company infrastructure and some contracts, should allow me to experiment with and develop my automation and virtualization skills by working with new technologies on the market and applying them to real situations. Whether I am able to utilize anything from oVirt to OpenStack or Ansible to Chef, I hope to learn and develop higher-level administration/engineering ideas that I will be able to leverage later on and throughout my career.

While transitioning from one thing to another is always hard, the future looks bright. A new opportunity awaits me to do work for others and to better myself holistically. I have nothing but the utmost respect for everyone at APX, and I wish them and the company all the best wishes for future endeavors. Much work has been done, but there is even more still left to do!

Onward and upwards,

Tuesday, January 27, 2015

January Blizzard of 2015

(Disclosure: I am in no way, shape, or form a meteorologist or forecaster)

Snowfalls from January 26th/27th as of 5:08pm (via @NWS)
Large storms that impact high-population areas of the world (especially the US) always draw some sort of response from the general public. Whether that response is to note that the forecast was right-on or not, it is always heard. Yesterday's storm that impacted the northeast US coast was no different - it was expected to bring over a foot of snow to areas of the country north and east of Philadelphia, all of New Jersey, New York City, and all the way up to Boston and Maine.

Forecast snowfall by the NWS New York on January 26th - ~20 hours out
Just a day ahead of the storm, there were multiple forecasts of 18-24" for the region covered by the National Weather Service (NWS) New York office, which did not seem incredibly outlandish. The two main global models - the GFS and the Euro - were consistently saying that there would be significant amounts of snow for the entire region, although the two models differed by more than a few inches for the greater New York City area. These same models also showed 2-3 feet of snow for the Boston metropolitan area and elsewhere north, which they have indeed received.

During and after the storm, the system tracked slightly east, and New York was left with a maximum of about 11-12 inches of snow, significantly under the "official" forecast amounts. While still a very sizable amount of snow, for which the city appeared to sufficiently prepare, it left hundreds or thousands disappointed as the "potentially historic" storm did not happen. This was for a multitude of reasons, both technical and human, but the outcome points to a purely human problem of communication: how to accurately portray forecasts and the inevitable uncertainty that arises in trying to predict the future. The forecast presented by anybody - NWS or otherwise - does not accurately portray the entire bell curve of potential outcomes.

No forecast is going to be 100% correct, but it may be 100% wrong, depending on how events turn out. The NWS was one of the institutions that went with snow estimates leaning on the high side for this storm; it appears that one of the models was favored more than the others due to its past reliability. This gets us to the underlying issue: forecasts are scientific, educated guesses. They are analyzed thoroughly, but there is no getting around the fact that we cannot completely predict future weather. Try as we might, there is an inevitable amount of uncertainty and unknown potential. One of the key takeaways of this storm is that this known unknown needs to be presented along with the forecast, in order to give people full knowledge of the situation.

Several different stations had their own takes on what the chances of snow in the NYC area would be. (@brianstelter)

All in all, the forecast as a whole was pretty decent. The total numbers were spot-on from around Long Island (30 inches!) to Connecticut and Boston, but lacking on the west side. As the image above shows, everybody has their own take on the information available to them. If uncertainty is expressed even among forecasters, then that information needs to be translated to the public as well. Why do we assume - or at least make it seem - that a forecast will fall into a single category or gradient, when we know those are only the most statistically likely options?

Capital Weather Gang forecasts, including "boom" and "bust" percentages
One of the groups doing weather forecasting in the DC region is the Capital Weather Gang, who have mastered displaying a forecast along with the potential for going above or below what is predicted. Especially when presented in a highly readable format, this gives the general reader a range of snow/precipitation to expect, along with the explicit knowledge that the final result may exceed or fall below what is written. Some system of this type needs to be adopted on a larger scale by others in order to help convey the challenges of forecasting. While we wish that forecasts could be completely accurate all the time, that is simply not the case.
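One way to derive that kind of boom/bust range is to report percentiles of an ensemble of model runs rather than a single number. A sketch with made-up snowfall values standing in for ensemble members:

```python
# Report a forecast as a range rather than a point value, using
# quartiles of an ensemble of model runs. Values are made up.
import statistics

ensemble_inches = [8, 10, 11, 12, 12, 14, 16, 19, 24, 28]

quartiles = statistics.quantiles(ensemble_inches, n=4)
low, median, high = quartiles[0], quartiles[1], quartiles[2]
print(f"likely {median}\" (bust near {low}\", boom near {high}\")")
```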

Forecasting, like any science, is challenging. There will be good days, and there will be bad days. But at the end of each of those days, we need to evaluate what went right and/or wrong, and use those lessons to feed future forecasts. Most forecasters do not do what they do for the money - they are in the job, just like any other, because it is what they like: the work, the challenge, the reward, whatever it may be. A forecast that busted because an area only received a foot of snow instead of two or three feet? There's some sort of humor in that. They forecast that it would snow a significant amount, and it did! Were there wording mistakes by people all around when calling for the "potentially historic" storm to topple the charts of largest storms? Yes, that's for certain. Conveying the risk in a forecast is part of what needs to be accomplished in order to give people information on how to go about their day - just as we say there might be a 75% chance of rain on a given day - so that it can become part of their decision process for how to proceed.

All in all, the overall forecasts for this system were not bad. Areas that were forecast to receive snow received snow, and those that didn't generally did not. The global and mesoscale models did what they were supposed to and provided information to guide forecasts, and forecasters did what they thought was best. Mother Nature did what she does best and threw a couple of curveballs. No entity is infallible, but every entity has the opportunity to see where improvements may be made and execute on those. Presenting forecasts to the public is no different.

Additional reading material

Saturday, January 24, 2015

Well well...

Last post: November, 2012. Today: January, 2015. It's been an interesting 2-some years.

As much as I thought I might never bother to see this site again, another part of me apparently wants to attempt to start blogging again for unknown reasons. Much has changed in the 2-ish years since I last put up anything, and hopefully one change for the better is in my writing and my ability to convey information.

For various reasons I will probably end up moving this to another provider - Medium, Blogger, etc. - as there really isn't much of a point in hosting this myself. In the meantime, though, I might end up putting up a couple of posts. Potential topics include a little bit of fascination with DC/NOVA history, weather, aviation, or various other items that end up being interesting to me.

EDIT: And yes, I'm laughing and shaking my head about these previous posts more than any of you are. Some (scratch that - a lot) of them are pretty terrible, both content-wise and layout/flashiness.

Sunday, November 4, 2012


Bet you can’t do….this! The coolest self-pic in the world.

Thursday, August 16, 2012

Video, video, video...

Well, I do apologize for my blog being down. The server it was on was being physically moved to a new location, so it needed time for relocation. Though I doubt many people actually bother to follow my sea of links anyway :-P

“Vote the Humans Out” – Hank for Congress has a new campaign ad out

“Aurora” – a song “dedicated to those who lost their lives and were affected by the tragedy in Aurora, Colorado,” recorded by Hans Zimmer

The Aurora shooting, while tragic, was also a way for the area ER departments to make use of their mass trauma practice and training

Technology just keeps evolving, and it’s becoming evermore part of our daily lives. ArsTechnica gives us a review of 35 years of personal computers. At the same time, there’s a profile of 30 years of the Commodore 64.

In older news, a power blackout in India cut power to approximately 600 million people in several states (power was restored a day or two later), the FBI went digital, Microsoft released a new email service, and cats still love lasers.

And finally, a poetic look at exploration.