Tuesday, July 14, 2015

The Silver Line is Open: What About Performance?

Correction: The Rail On-Time Performance chart has been updated with one that includes the Blue Line. I apologize to all BL riders that I left off. Thanks to @keck41 for alerting me.

With the WMATA Metrorail Silver Line Phase I now open, the overall system still functions but is hobbling along more than ever. In customers' eyes, delays have increased, breakdowns and single-tracking are more frequent, and crowding has grown. The recent WMATA proposal to optimize the rail system calls for cutting trains on the Orange, Silver, Yellow, and Green lines while Blue line service capacity would be increased. To better manage trains running through the Rosslyn tunnel - which has a theoretical maximum of 26 trains per hour - WMATA wants to reduce the total number of trains running through from 26 to around 23 trains per hour, and increase the percentage of 8-car trains run on those lines. One of WMATA's lines of reasoning for this change is train spacing:
“All those trains really have to hit that tunnel almost perfectly timed. If one thing goes off, it can throw off the entire system,” said [Sherry] Ly. Running fewer trains would improve reliability, she said. The proportion of eight-car trains on the affected lines also would increase (most rush hour trains currently carry only six railcars).
But is this really the problem that needs to be solved, and is it really why Metro is making this change? More likely, the ultimate reason is that Metro badly needs to reduce the number of train cars in service in order to bring performance back up. Here's why:

Rail Line Performance

WMATA Rail Line On-Time Performance, 2009-2015
To get an idea of the overall picture, this graph shows WMATA on-time performance for all rail lines dating back to mid-2009. Visible starting July 2014 is the addition of the Silver line, with an average on-time performance through March of 86.5% - the lowest of all six rail lines between its opening in July 2014 and March 2015. The introduction of this new line correlates strongly with significant decreases in performance on the Orange and Green lines, and less so on the Red and Yellow lines. Through most of the chart, all lines remain reasonably close to each other in performance.

Rail Equipment

To look at another part of the equation, have you ever wondered about the reliability of the rail cars that Metro runs? Not all of them have the best history. While they are the oldest, the 1000-series cars actually do not have the worst mean distance between delays (MDBD) measurement (granted, they have other critical issues). At an average distance of 30,788 mi, the 4000-series cars are the least reliable in the WMATA fleet*. This is all too clear on the graph below. The second worst-performing series in the fleet are the 1000s, closely followed by the 5000s, at 50,028 and 51,456 mi respectively. Given the data available, there does not appear to be a statistically-significant correlation between the Silver line opening and rail car performance itself. There is potentially a decrease in performance of the 6000-series cars after the Silver line opening, but there isn't enough data yet to know whether that's a temporary blip or a new long-term trend.
WMATA Rail Car Mean Distance Between Delays (by Series) - 2009 - 2015

WMATA Rail Car Series Mean Distance Between Delays, Oct/2013-Mar/2015
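The MDBD metric itself is straightforward to compute from monthly performance data. A minimal sketch, assuming a simple record layout (the field names and sample numbers here are mine, not WMATA's actual schema):

```python
# Sketch: computing mean distance between delays (MDBD) per rail car
# series from monthly performance records. Field names are illustrative.

def mdbd_by_series(records):
    """Average miles per delay across all monthly records for each series.

    records: list of dicts with 'series', 'miles' (revenue miles run that
    month), and 'delays' (delay-causing failures that month).
    """
    totals = {}
    for r in records:
        miles, delays = totals.get(r["series"], (0, 0))
        totals[r["series"]] = (miles + r["miles"], delays + r["delays"])
    # Aggregate first, then divide: averaging per-month MDBD values would
    # weight low-mileage months too heavily.
    return {s: m / d for s, (m, d) in totals.items() if d}

sample = [
    {"series": "4000", "miles": 310_000, "delays": 10},
    {"series": "4000", "miles": 305_700, "delays": 10},
    {"series": "6000", "miles": 600_000, "delays": 5},
]
print(mdbd_by_series(sample))
```

With these made-up inputs the 4000-series works out to roughly 30,800 miles between delays, in the same ballpark as the fleet figure quoted above.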

Rail Line Impact

Available from WMATA is the Daily Service Report (DSR), which tracks all customer delays of over 3 minutes. The data pulled out in the graph below are the Orange and Silver line trains that were canceled or otherwise did not operate in the Metrorail system, known as those that Did Not Operate (DNO). A train might be marked as DNO for a variety of reasons, but one main cause is simply not having enough cars to make a full train. For instance, if there are too few cars, or an odd number of cars, available to make a train consist, that train cannot run. Another way to get a DNO train is if WMATA has only 1000-series cars left to act as the head and rear of the train; since those cars cannot lead or trail a consist, the train would not be able to run.
Number of trains that "Did Not Operate" on both Orange and Silver, 1/2013-6/2015

WMATA "Did Not Operate" raw data, 1/2013-6/2015
Up until June 2014, the graph shows a fairly even distribution of DNO trains on the Orange line, hovering around 8 per month. This all changes when the Silver line becomes operational, however. At that point (July 2014), the numbers spike dramatically. Vienna station hits a maximum of 50 DNO trains in June 2015, and there are several months with over 20 trains that did not operate as scheduled from Vienna. Impacts on New Carrollton are visible as well, but not nearly as severe, likely due to station location and yard storage.

What do these numbers mean? On their own they're pretty abstract, so let's put them in context.

Let's, for example, take a look at February 2015, when 30 trains originating at Vienna were marked DNO. Given current operating rates, we can estimate that ~33% of these DNO trains should have been 8-car trains and the rest 6-car trains. Per WMATA documentation, the acceptable capacity of each rail car is approximately 100 people. These 30 trains that did not operate account for 200 rail cars, which could hold approximately 20,000 riders. Those 20,000 riders then must crowd onto other trains, increasing the crowding on each of those. Overly-crowded trains then exacerbate platform overflows when train offloads inevitably happen, and so on.
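The arithmetic above can be sketched out directly; the 100-riders-per-car figure is WMATA's planning capacity as cited, and the 33% 8-car share is the estimate from the text:

```python
# Back-of-the-envelope check of the February 2015 numbers: 30 DNO trains
# from Vienna, ~1/3 of which should have been 8-car trains, at ~100
# riders per car.

def lost_capacity(dno_trains, eight_car_share, riders_per_car=100):
    eight_car = round(dno_trains * eight_car_share)
    six_car = dno_trains - eight_car
    cars = eight_car * 8 + six_car * 6
    return cars, cars * riders_per_car

cars, riders = lost_capacity(30, 1 / 3)
print(cars, riders)  # 200 cars, 20000 riders' worth of capacity
```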


Not enough train cars
There are a multitude of reasons why performance on the rail system is suffering, but only a couple of conclusions can be drawn from the subset discussed here. One: WMATA does not currently have enough train cars to run the full system including the Silver line. The Silver line (Phase I) requires 64 train cars to operate. These 64 were to have been supplemented by the 7000-series cars, of which only 16 are currently in revenue operation due to various delays. WMATA states that the current system requires 954 train cars to operate, and the agency has approximately 1126. Approximately 25% of the total cars in WMATA's possession are designated as out of service for maintenance, spare, or other reasons. With 954 cars required, this drops the operating spare ratio to only 15%, and sometimes even lower.
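The spare-ratio arithmetic from the paragraph above, worked out:

```python
# Spare ratio: cars not needed for scheduled service, as a share of the
# whole fleet. Figures are the ones cited above.
fleet = 1126
required = 954
spares = fleet - required
spare_ratio = spares / fleet
print(f"{spares} spare cars, spare ratio {spare_ratio:.0%}")  # 172 spare cars, spare ratio 15%
```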

With fewer cars available to put into service when others break, domino effects become more likely. Fewer trains may be available to run at peak hours due to equipment constraints (and thus marked DNO, as when the 4000-series cars were taken out of service). In addition, each car is likely to have less time available for maintenance, meaning the chance of breakdown increases over time.

Train cars do not meet reliability targets
For years, WMATA has targeted a reliability level of 60,000 track miles between delays. Of the series measured, only the 2000/3000 and 6000-series cars have managed to average above the target over time. Lower train car reliability is hurting performance and increasing problems and offloads. This will hopefully change as the 7000-series cars enter service, but it is far too early to know that for sure. Each offload has a ripple effect through the system and can cause significant customer delays, so it is imperative for WMATA to increase car reliability and keep it as high as possible.
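As a quick check against the 60,000-mile target, using the average MDBD figures quoted earlier in this post (the 2000/3000 and 6000-series averages aren't listed here, but both sit above the line):

```python
TARGET_MDBD = 60_000  # WMATA's target, in track miles between delays

avg_mdbd = {  # long-run averages quoted above, in miles
    "1000": 50_028,
    "4000": 30_788,
    "5000": 51_456,
}

meets_target = {series: miles >= TARGET_MDBD for series, miles in avg_mdbd.items()}
print(meets_target)  # none of these three series meets the target
```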

What WMATA is Doing

By cutting service on Orange/Silver/Yellow/Green, WMATA is doing two things: a) allowing train schedules to synchronize to make running the system easier, and b) reducing the number of train cars needed to run service.

Greater Greater Washington goes into great detail about train spacing and its effect on the system, so I won't touch on that here. The reduction in cars needed will mean that more cars have more time for routine maintenance and light repair. If WMATA takes care of its cars, it may even be possible to increase the reliability of the rolling stock, thus decreasing brake issues, door problems, and ultimately offloads. If the reduction in service is approved, that is one outcome to look forward to. Meanwhile, expedited delivery and testing of the 7000-series cars is necessary in order to properly run the Silver line and the rest of the Metrorail system.

While these actions do not make up for the larger issues plaguing the system, they are one possible way to return a tiny bit of normalcy to a currently-troubled system. This could be the right thing to do, but the road back to a happy and healthy transportation system is going to be long and tumultuous.

* Data used for this measurement is monthly performance data from July 2008 to March 2015.

Friday, July 3, 2015

Washington Metrorail Q2 CY2015 Daily Status Report Overview

[updated] This post has been updated on 4 July 2015 to reflect the location of stations where trains did not operate as scheduled.
The Washington Metropolitan Area Transit Authority (Metro) was created by an interstate compact in 1967 to plan, develop, build, finance, and operate a balanced regional transportation system in the national capital area. Metro began building its rail system in 1969, acquired four regional bus systems in 1973, and began operating the first phase of Metrorail in 1976. Today, Metrorail serves 91 stations and has 117 miles of track. Metrobus serves the nation's capital 24 hours a day, seven days a week with 1,500 buses. Metrorail and Metrobus serve a population of 5 million within a 1,500-square mile jurisdiction. Metro began its paratransit service, MetroAccess, in 1994; it provides about 2.3 million trips per year.
The Washington Metropolitan Area Transit Authority (WMATA) operates a significant number of trains on its network every day, and accrues delays and other operational issues over time. WMATA logs all of these delays - those that affect trains and cause passenger delays of three minutes or more at a stop - in the Daily Service Report (DSR). Raw data used for this and other analyses can be found on the WMATA website at: http://www.wmata.com/rail/service_reports/viewPage_update.cfm.

This post sets out to take the three months of data from April to June 2015 (Q2 CY2015) and point out trends and other data of interest, especially changes in delay rates over time, delay types, and how these have affected each of the system's six rail lines during the time period of interest.

By the nature of the DSR, anything contained within it had a negative impact on train and/or passenger operations. Each entry in the DSR notes the time of the delay, the line and station, and the cause. Most entries include the delay length, but a percentage of reports do not. For all entries without an estimate, the delay length has been left blank. This has the possible consequence of excluding significant delays from the data matrix; however, it is not reasonable to fill in the data, as this would introduce error and potentially-incorrect information.
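As a sketch of how the "leave it blank" rule plays out in practice, a parser can simply return None when an entry doesn't state a delay length. The regex below assumes the common DSR phrasing ("delays up to N minutes"); it is illustrative, not a full grammar of the reports:

```python
import re

def parse_delay_minutes(entry_text):
    """Return the reported delay in minutes, or None if not stated.

    None (rather than a guessed value) keeps unestimated delays out of
    any averages computed later.
    """
    m = re.search(r"delays up to (\d+) minutes", entry_text)
    return int(m.group(1)) if m else None

print(parse_delay_minutes("Passengers experienced delays up to 12 minutes."))  # 12
print(parse_delay_minutes("The train was expressed past the station."))        # None
```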

Data Comments
Due to the format of some data, the classification of delay has been changed in certain circumstances:
  • When an incident occurred at a station served by multiple lines, the incident was classified under the line that caused the delay. For instance, if an Orange line train was offloaded due to brake problems at Ballston (affecting the Silver line as well), the incident was classified under the Orange line heading
  • Instances of "passenger struck by the train" have been reclassified as a Medical Emergency
  • "Track obstruction" has been classified under Track Problem
  • "Trespassers" and "unauthorized persons on the track" have been classified under Police Activity
  • Escalator outages causing trains to skip stations and induce delays have been classified as Operational Problem
  • Trains that are noted as "expressed" (meaning the train did not service a particular station or stations) do not have a delay in minutes associated with them, as none is provided by WMATA. There is certainly a delay associated with a train skipping a stop, but it would not be proper to assign one without further study.
  • Total delay, April 2015: 572 delays, or 3763 minutes
  • Total delay, May 2015: 520 delays, or 3519 minutes
  • Total delay, June 2015: 698 delays, or 4709 minutes
  • Total delay, 2Q CY2015: 1790 delays, or 11191 minutes (avg 7 min/delay)
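The reclassification rules listed in the Data Comments above amount to a simple lookup table. A sketch (the raw-cause strings are my normalizations of the DSR phrasing, not literal quotes from the reports):

```python
# Map raw DSR cause phrasings to the classifications used in this post.
# Anything not listed keeps its original classification.
RECLASSIFY = {
    "passenger struck by the train": "Medical Emergency",
    "track obstruction": "Track Problem",
    "trespassers": "Police Activity",
    "unauthorized persons on the track": "Police Activity",
    "escalator outage": "Operational Problem",
}

def classify(raw_cause):
    return RECLASSIFY.get(raw_cause.strip().lower(), raw_cause)

print(classify("Track obstruction"))  # Track Problem
print(classify("Door problem"))      # Door problem (unchanged)
```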
The top five causes of Metrorail train delays caused a total of 9048 minutes of delay throughout the system, or 81% of the total delayed time. The graph below shows the breakdown of the top five types of delay over each of the three months as measured by total minutes of delay.
WMATA Monthly Total Minutes by Type of the Top 5 Causes of Delay for Q2CY15

  • Track problems caused 831 minutes of delay (13 minutes per delay on average)
  • Door problems caused a total of 1352 minutes of delay (8 minutes per delay on average)
  • Operational problems caused 1020 minutes of delay (7 minutes per delay on average)
  • Trains that did not operate caused 3146 minutes of delay (6 minutes per delay on average)
  • Brake problems caused 2699 minutes of delay (6 minutes per delay on average)

Most Frequent Cause of Delay
The most frequent causes of delay during the second quarter of 2015 were trains that did not operate as scheduled. This can happen if the rail cars to run the scheduled train are unavailable, if there is no operator, or for other unspecified reasons. For instance, one example is "a Franconia-Springfield-bound Blue Line train at Largo Town Center did not operate, resulting in a 12-minute gap in service." (6/15/2015). Each train that did not operate caused a service gap of approximately 6 minutes on average.

The graph below shows the time of day for each of the three months in the quarter when there were trains that did not operate as scheduled. The number of trains that did not operate rose overall from 124 in April to 128 in May, and then increased to 223 in June. The increase in this type of delay event can likely be partially attributed to all 100 4000-series train cars being pulled from revenue service during the quarter (http://www.wmata.com/rider_tools/metro_service_status/advisories.cfm?AID=4986) over issues with doors opening while moving. Without the 100 4000-series cars to run on the rail system WMATA would have to also decrease the number of 1000-series rail cars in service, since those are not allowed to be either the head-end or rear pair of cars on any train consist. 50 of the 4000-series rail cars have been put back into service as of this writing in early July.

WMATA Monthly Totals of Trains that "Did Not Operate" per Hour (0hr-23hr) for Q2 CY15
Diving into the data a little, we can take a look at which stations are affected by these trains marked "Did Not Operate." The majority of the increase is concentrated at three stations - one on Orange and two on Green: Vienna, Greenbelt, and Branch Avenue. All three are termini, where the vast majority of Metrorail trains originate their trips. For the quarter, these three stations account for 48% of all trains that did not operate.
WMATA Trains Per Station Classified "Did Not Operate" for Q2CY15. Grouped by Metrorail line color.

The second most-frequent cause of train delay is trains that have been expressed past a station. This means that the train does not stop, and instead continues on to the next station on its route. This is usually done by WMATA "for schedule adherence/improved train spacing" on the line, especially during recovery from an earlier incident causing train bunching.

WMATA Service Delays by Type per Month for Q2 CY15
Most Delayed Line
For all three months in the quarter, the Red line "won" with the most delays, a total of 516 over the quarter. Note also that the Red line is the single longest in the system, and thus would proportionally be expected to have more delays than the rest.
WMATA Service Delays by Line for Q2 CY15
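One way to sanity-check the Red line's 516 delays is to normalize by route length. The route-mile figures below are my approximations, not numbers from the DSR:

```python
# Delays per route mile: a crude normalization for line length.
# Route miles are approximate; delay counts are the quarterly totals
# cited in this post for Red and Blue.
line_miles = {"Red": 31.9, "Blue": 30.3}
delays = {"Red": 516, "Blue": 165}

per_mile = {line: delays[line] / line_miles[line] for line in delays}
print({line: round(v, 1) for line, v in per_mile.items()})
```

Even per mile, the Red line still comes out well ahead of the Blue, so line length alone doesn't explain the gap.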

Least Delayed Line
The least-delayed line throughout the Metro system during Q2 of 2015 is the Blue Line, with 165 total delays. In addition to having longer headways (thus, fewer trains) than other lines in the Metrorail system, a significant portion of the Blue line route is shared with other lines (Yellow, Silver, and Orange) which all have higher numbers of delays, potentially impacting Blue line performance as well.

Longest Delay
There were multiple delays (five) that tied for the longest delay/impact to customers of up to 60 minutes. The first noted delay during the second quarter occurred on Friday April 3rd, 2015 at 8:02am:
A Glenmont-bound Red Line train outside Cleveland Park experienced a brake problem. A Glenmont-bound Red Line train was used to recover the incident train. Both trains were moved to the platform and offloaded. Several trains were single tracked around the incident train and several trains were offloaded and turned back for schedule adherence/improved train spacing. Passengers experienced delays up to 60 minutes.
 Of the same length was another delay on Thursday April 16th, beginning at 9:39am:
A Greenbelt-bound Yellow Line train at Fort Totten reported a track problem. Green Line service was suspended between Prince Georges Plaza and Georgia Avenue until approximately 10:30 a.m. Several trains were offloaded and turned back for schedule adherence/improved train spacing. Passengers experienced delays up to 60 minutes.
The next delay of 60 minutes was not until a month later on Thursday May 21st at 3:36pm:
Orange/Silver/Blue line train service was suspended between Eastern Market & Minnesota Ave and Eastern Market & Benning Road due to a power problem. Shuttle bus service was provided. Power was restored at approximately 4:50 p.m. Passengers experienced delays up to 60 minutes. 
Monday, June 1st brought the next 60-minute delay at 7:21pm:
A Glenmont-bound Red Line train at Shady Grove was delayed due to a signal problem. Passengers experienced delays up to 60 minutes.
 Rounding out the quarter, a 60-minute delay was reported on Monday, June 29th beginning at 5:34pm:
A Shady Grove-bound Red Line train outside Farragut North experienced a brake problem. A Grosvenor-Strathmore-bound Red Line train at Metro Center was offloaded and used to recover the incident train, which was moved to the platform and offloaded. Several trains were single tracked around the incident train and several trains were offloaded and turned back for schedule adherence/improved train spacing. Passengers experienced delays up to 60 minutes.
While the link to the raw WMATA DSR data is provided above, the processed and analyzed dataset is not included. This can be provided upon request by contacting me @srepetsk on Twitter or at blog [at] srepetsk [dot] net via email.

While I have done my best to ensure the accuracy of the data contained in this post, it is provided as-is with no guarantee that it is correct. Please feel free to contact me if you note, or suspect, an error.

The author of this content does not speak for and is in no way associated with the Washington Metropolitan Area Transit Authority other than being a rider of the system.

Sunday, March 1, 2015

Review: Pockethernet, "The Swiss army knife of network administrators"

Being a systems & network administrator, sometimes I get to deal with making, crimping, repairing, or otherwise troubleshooting cables, switches, and basic networking functions. Generally one might use a Fluke or other network tester in any of these situations - for example, a user calls in needing a long Ethernet cable made and you don't have any non-bulk cable handy. Unless you do this a lot, a Fluke isn't something you're going to just have sitting around, so you might try visually inspecting a cable or plugging it into network hardware you have lying around to see if it works. This process can be cumbersome, inefficient, and take far more time than it should.

Well, not anymore. The Pockethernet, a network testing device stemming from an Indiegogo funding campaign early in 2014, is attempting to change all of that. Brought to life by German duo Zoltan Devai and Jeroen van Boxtel, the device is aiming to be an all-in-one device to make every IT person's life easier. I received mine in the mail a few days ago and have been playing with it to develop some first impressions since then. Past the video, I get into the unpacking and the Pockethernet's uses!

The original Indiegogo campaign closed in March of 2014 with 370% of the original $50,000 goal, and there have been several delays since the devices were supposed to have been shipped. This is not the end of the world, and I only had thoughts that I might not get the device a couple of times. There have been over 20 updates sent out to backers between the campaign closing and now, keeping us up-to-date with the status of production, shipping and several challenges that were encountered along the way. Producing the devices took longer than expected with a couple revisions and retooling required, regulations taking longer than expected to be sorted out for all the governing bodies worldwide, and even running into difficulties dealing with UPS/DHL.

All of that is now nearing completion, and devices have been and are being shipped. After being unpacked, this is the case that the unit and cables came in.
Pockethernet in case
In addition to the case above, two documents were included: a welcome document and a manual about the device itself. The first page of the information document is dedicated to regulatory information (legally-required text, yadda yadda yadda), and the rest of it talks about the device itself.
Pockethernet case opened
The Pockethernet comes with a dual-function dongle so that you can either test to find the wire map of a cable or turn what you have into a loopback cable, an Ethernet cable, and a short USB cable for charging.
Front of Pockethernet 
The device itself is contained in a machined metal case, and feels incredibly sturdy. The front and back of the device are closed with clear plastic so that you can see through to the other side. The PCB is the main component inside the case, and the 750mAh battery is situated just on top of it.
On this front side are the Ethernet jack itself and 5 LEDs:
Pockethernet - Front
Pockethernet - Rear
Rear of Pockethernet
It appears that the logo and regulatory information printed on the outside of the case are reversed from where they should be. When seated so that the Pockethernet name and logo are visible as in the photos above, the PCB sits near the top of the case with the battery hanging below it. When flipped to match the pictures in the manual, the regulatory information is right-side up - and who wants to see that?
Wiremap/Loopback dongle
This little dongle comes with the Pockethernet; there is not yet any documentation to go along with it, so the "Loopback" and "Wiremap" labels on it are all there is to go on. Those are fairly self-explanatory, however.
Pockethernet start screen
When starting up the Pockethernet Android application this is the first and main screen that appears. Down the left column are icons for the 4 cable pairs, Power over Ethernet, link, and DHCP/network information. Selecting Connect at the bottom of the screen starts the connection between the phone and the Pockethernet. To do a regular cable test on a disconnected Ethernet cable, the Wiremap portion of the dongle lets you test to make sure that the cable generally works.
Wiremap test function
Once connected to the cable that you want to measure or test, selecting the Refresh button at the bottom actually runs the test. The screen above shows the cable pairs and which pins are connected where. This test took place with a regular straight-through Ethernet cable, so the wiremap function served simply to reflect the signal back to the Pockethernet from the opposite end. Per the backer forums, the O, S, and C listed next to each pair appear to stand for Open, Shielded, and Closed in relation to the electrical circuits.
Wiremap TDR results
Time-Domain Reflectometry (TDR) is used to find the length of the cable that you have connected. In this case, I tested with a 2-meter cable. The reporting is wonky and shows different lengths for the pairs, which is a known issue. After a few weeks, the developers hope to release an updated version of the application with fixes for bugs reported in that timeframe. Given this, the results appear to properly show that the cable is not "connected" to another device (Wiremap function, not Loopback) but is returning results.
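For context on what the TDR test is measuring: the device sends a pulse down the pair and times the reflection, and the cable length follows from the cable's velocity of propagation. A sketch (the 0.64 velocity factor is a typical figure for Cat5e twisted pair, an assumption on my part, not a Pockethernet spec):

```python
# TDR length estimate: the pulse travels down the cable and back, so
# length = (velocity factor * c * round-trip time) / 2.
C = 299_792_458  # speed of light in a vacuum, m/s

def tdr_length(round_trip_seconds, velocity_factor=0.64):
    return velocity_factor * C * round_trip_seconds / 2

# A ~20.8 ns round trip on Cat5e corresponds to roughly 2 m of cable.
print(round(tdr_length(20.8e-9), 2))
```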
Cable gigabit information
This screen isn't the most helpful, but it appears to show that all four pairs of cables are connected and which is the transmit or receiving side.
Online network test
One of the neat features of the device is that you can do an Online test with it, meaning you can make it connect to your network, receive a static or DHCP address, and ping up to three addresses. A quirk of the device appears to be that you have to run a test on the cable first before you are able to run the Online test. Each test to verify connectivity with the server(s) that you specify takes approximately 3 seconds so you can get enough information to know what is working and what might not be.
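Conceptually, the Online test is doing something like the sketch below: take each configured target and check reachability with a short timeout. This uses a TCP connect rather than ICMP ping (raw ICMP needs elevated privileges), so it's an approximation of the device's behavior, not a description of its internals; the target addresses are placeholders:

```python
import socket

def reachable(host, port=80, timeout=3.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# check up to three targets, Pockethernet-style
for target in ["192.0.2.10", "example.com", "10.0.0.1"]:
    print(target, reachable(target, timeout=1.0))
```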
Offline network testing
If you have a cable toner, the top half of the Offline testing screen is useful to you. Like any other tone test, you can try to use this to find a cable that you're looking for in a patch panel or other scenario when otherwise finding it would be difficult. The bottom half lets you test the Bit Error Rate (BER) of a cable using the Loopback side of the dongle. Testing at 1000Mbps is currently broken (another known issue) but 10/100Mbps works just fine.
Generate report screen
A neat feature built into the software is that you can send a PDF of the latest test result that you have generated to an email address in order to export some data. It piggybacks off of whatever mail program you have set up on your phone, but it gets the job done. The report itself is basic and includes all the information shown above minus the graphics, so it isn't incredibly useful yet. Styling and image work will go a long way to make the report feature more useful. Yes, that is a button in the bottom-right that lets you attach a photo to the generated report, perhaps of the drop that you are testing on or other verification of the job you've just completed.
Generated PDF of test results
The report, while nifty, isn't incredibly useful yet. The wiremap and TDR information, as shown above, appears to be incorrect, displaying Short/Open where Closed and a > 0m length would have been the proper results. Again, this is a software bug that can and hopefully will be fixed with an app update.
Finally, the software has a couple basic options that you can change as well as lets you see the device and software information. There isn't all that much here yet, but I expect more to come in future software updates.

While playing with the device for a short while I discovered and read about a few odd things it does, some of which can be worked around.
  • There isn't yet a quickstart or how-to guide for using the product, although there is an active backer forum where questions are being answered very quickly. If you generally know something about networking and what you are doing, then it probably will not take you long to get up to speed and make use of the device.
  • The Pockethernet cannot be used while connected to its USB charger.
  • Before you can start an Online test, you are required to run a generic cable test, presumably to let the software know that it's connected to something.
  • The cable length results are sometimes correct, but not in other cases. This is a known issue which should be fixed shortly.
  • Testing the BER at gigabit speeds currently does not work, although it does at 10 and 100Mbps. As before, this is another known issue reported on the forums.
So...does it really do that?
The device as delivered is a solid piece of hardware that accomplishes many of the goals it set out to accomplish, but the software is not completely there yet. For something I put money down on before there was even a physical product, it holds great promise through continued software updates to fix bugs and add functionality. One item specifically mentioned to be added is LLDP/CDP support; other planned features include VLAN support, SNMP/web configuration, and 802.x support.

The Pockethernet is still in its early stages, but looks to only have a bright future ahead with continued software updates. At $150 (not yet generally available for purchase) or even slightly more, it is a steal compared to the larger and more-expensive network tools for basic network testing and will continue to chip away at the higher-end tools as more functionality is added. Having only possessed the device for around a week I have already had to use it several times to verify cables both at home and at work, so it is already paying for itself. If you can steer around the various bugs that exist currently, I would definitely suggest looking to purchase the Pockethernet when it becomes available. Otherwise, wait a few months for those to be ironed out and new features added, and then purchase yourself a Pockethernet. I can certainly see myself never needing another network testing tool as long as I have my Pockethernet by my side. 

Friday, February 27, 2015

Metrorail Fares Through the Years

How much would you pay for a trip on the Washington Metropolitan Area Transit Authority (WMATA) Metro? How much are you willing to pay for a trip on Metrorail, and how fair is that charge? As 2015 continues and discussions for the WMATA 2016 fiscal year start up, there have been preliminary discussions about raising the base fares for both rail and bus, although final budget votes won't happen until later in May. With this potential new round of fare increases in the works (even if fares don't go up, local jurisdictions are being asked to pay more than last year), it is important to know some of the history of Metro fare increases since the inception of the system, in order to judge its current and future actions.

Metrorail Fares
Since the beginning of the Metrorail system in 1976, the transit agency has used a distance-based fare - that is, if you ride further on the system, you pay more. The current fare structure includes a base fare and two zones, with base prices varying depending on whether you enter the system during rush or off-peak hours. As the thinking goes, if you take a longer ride on the system then you are using more of its resources, and thus need to pay more to help maintain it. This has generally worked over the years to keep fares the largest source of income for the system and Metrorail the second-largest system in the US; however, significant funding has always been required from the DC/MD/VA governments to keep the system fully funded. Fares have not gone up every year, or even every other year, on a regular basis, which got me interested in taking a look at the longer-term Metrorail fare history.

The first thing of interest is the cost of a Metrorail fare over time. There are two numbers which we can generally rely on since the inception of the system: minimum boarding charge, and maximum fare. Per the fare calculation that Metro uses when you swipe into the system, the boarding charge is the minimum fare that you are charged in order to get through the gate. This has always been the case since 1976, and continues to this day. The graph below is a chart of several pieces of information: a) max possible trip distance, b) max peak fare, c) off-peak max fare, d) peak boarding charge, and e) off-peak boarding charge. This is a lot of information, so just bear with me!

a) max possible trip distance: My understanding is that this is the maximum possible trip distance on a line in the system, taking the composite "as the crow flies" measurements between each station on said line. I'm not 100% sure this interpretation is correct, however. That said, in the graph below it represents the maximum distance you could travel on a Metro line. This distance hasn't increased much since the work done in the mid-90's on the Red, Blue, and Green lines.
b) max peak fare: The maximum amount of money you can pay when riding the system during peak hours, denoted here in orange.
c) off-peak max fare: The same as the peak fare, except the maximum you can pay during off-peak hours.
d) peak boarding charge: The minimum amount it will cost you to step into the Metrorail system during peak hours.
e) off-peak boarding charge: The minimum amount it will cost you to step into the Metrorail system during off-peak hours.

Minimum and maximum potential costs of riding Metro
A line was superimposed over the three most interesting sets of data: the max peak fare and the peak/off-peak boarding charges. The off-peak max fare wasn't graphed, as this lower fee was essentially "created" during the 2000s to let off-peak trips cost less. As the graph shows, all three sets of data are fairly regular and conform to a linear increase, though the max peak fare increases at a higher rate than the boarding charges. Taken alone, this data would suggest that there have been (somewhat) regular fare increases since the beginning of the Metrorail system, and that those coming this year would be no different.

Wait, what? Fare increases over time might be... normal?

Why, apparently, yes! Just as milk, gas, bananas, and other products go up in price over time, so does a trip on the Metro system, thanks in no small part to this little thing called inflation. When the costs of living go up (food, gas, etc.), so do salaries, equipment costs, and the myriad other things Metro needs to function, and some of those costs get rolled into a Metro ticket. This is usually OK: wages also increase over time, so patrons have more money available to spend on the system.

Now you might say that, yes, inflation exists and things cost more because of it, but that the cost of riding Metrorail has gone up significantly faster than inflation over time. This, however, is where my own thinking was previously incorrect - the fares appear to be much more fair to the consumer than I had thought.

This second chart shows three sets of data: the ratio of change in the peak boarding charge, the ratio of change in the off-peak boarding charge, and year-over-year change in national US inflation. Lines were plotted to match all three, and (unsurprisingly) the peak and off-peak boarding charges (each fare compared to the fare before the previous increase) track very closely to each other, at approximately a 7% increase in cost per fare increase. The year-over-year inflation rate, meanwhile, has declined since the system opened in 1976 (9.10% in 1975, 5.40% in 1990, 1.60% in 2014). Still, average yearly inflation since 1975 clocks in at just over 4% (the double-digit inflation of the late 1970's, tied to the 1979 energy crisis, did not help at all). Compounded over time, that ~4% yearly inflation ends up amounting to more than the ~7% fare-per-fare-increase applied to the Metrorail system, because fare increases happen only every few years while inflation compounds annually:

Yearly changes in metro cost vs inflation
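To see why a 7% bump per fare increase can still trail ~4% annual inflation, here is a minimal sketch of the compounding math. The three-year interval between fare increases is a hypothetical assumption for illustration, not a figure from WMATA's actual fare history:

```python
# Sketch: cumulative fare growth vs cumulative inflation, 1976-2015.
# Assumptions (hypothetical): a ~7% bump per fare increase, a fare
# increase roughly every 3 years, and ~4% average annual inflation.
YEARS = 39                 # 1976 -> 2015
FARE_STEP = 1.07           # ~7% per fare increase (from the chart)
INCREASE_INTERVAL = 3      # assumed years between fare increases
INFLATION = 1.04           # ~4% average annual inflation

fare_growth = FARE_STEP ** (YEARS // INCREASE_INTERVAL)
inflation_growth = INFLATION ** YEARS

print(f"Cumulative fare growth:      {fare_growth:.2f}x")
print(f"Cumulative inflation growth: {inflation_growth:.2f}x")
# Each individual fare bump (7%) exceeds a single year of inflation (4%),
# but the bumps happen far less often, so inflation compounds past them.
```

Under these assumptions, roughly 13 fare bumps of 7% multiply out to well under the ~4.6x that 39 years of 4% inflation produces, which matches the shape of the chart above.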
Given this publicly-available data, and given how the math appears to work out, the minimum peak fare of $0.55 in 1976 would cost approximately $2.29 today when adjusted for inflation (a ~315% increase!), whereas the actual fare is now $2.15. Fares have certainly increased every few years, but the data do not show that fares have risen any faster than inflation - in fact, they have risen slightly slower.
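The inflation adjustment itself is a one-line calculation. This sketch uses approximate CPI-U annual averages (the exact values should be checked against BLS data) to reproduce the $0.55-to-$2.29 figure:

```python
# Sketch: inflation-adjust the 1976 minimum peak fare using CPI-U
# annual averages. CPI values below are approximate, for illustration.
CPI_1976 = 56.9    # approximate CPI-U annual average, 1976
CPI_2015 = 237.0   # approximate CPI-U annual average, 2015
FARE_1976 = 0.55   # minimum peak fare at system opening
FARE_2015 = 2.15   # minimum peak fare today

adjusted = FARE_1976 * (CPI_2015 / CPI_1976)
print(f"$0.55 in 1976 dollars is about ${adjusted:.2f} today")
print(f"Actual minimum peak fare: ${FARE_2015:.2f}")
# The actual fare sits slightly below the inflation-adjusted figure,
# consistent with fares tracking just under inflation.
```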

The results and the last graph are a bit confusing, but dipping back into derivatives every once in a while is nothing short of a fun adventure. In the end, the rate of change of Metrorail fare increases does not keep up with the rate of change of inflation. All things being equal, the average Metrorail rider actually gets more out of the system now than an average rider back in 1976 did. This is especially interesting now that the system is almost 40 years old, significantly larger, and covering significantly more area. The maximum peak fare that used to buy only 14.7 miles now buys 15.7 (a meager increase), while the maximum possible trip distance has grown significantly, from 17.30 miles to 29.60. Running the Metrorail system has grown more costly over time as it expanded, but customers have been able to take advantage of cheaper fares compared to what was charged in the past. If only this fully funded the Metrorail system, it would be a win-win for both Metro and customers; however, there is always more work to be done.

How does this revenue from years ago compare to what WMATA gets today? Localities certainly pay much more as well, which may or may not also stay in line with previous years. What lingering Metro questions do you have?

Cost of a Metrorail trip from 1 mile up to 30 miles, peak and off-peak, with capped and uncapped costs

  • WMATA Metrorail fares: http://www.wmata.com/about_metro/docs/History%20of%20Fare%20Increases%20FY2015.pdf

Friday, January 30, 2015

Time to say goodbye to APX Labs

I write this blog post with a bit of a heavy heart, as this is going to be a significant change for me. Friday, February 6th, 2015 will be my last day working full-time at APX Labs as an Associate Systems Engineer. After thinking about where I am, what I want to do, and how I might get there, I have decided that it is time to move on to a new position. After leaving APX, I will start work as a System Engineer at Berico Technologies here in Reston, VA.

I started working at APX in December of 2012. I had just graduated from RIT two quarters early, and was excited about coming back to the NOVA area to a) work full-time at a "real" job - not just an internship - and b) be back around family and friends. I originally worked on an engineering contract we had with the DoD, then eventually transitioned over to handling all internal IT services for the company, along with half a dozen other items that didn't really fit any one role.

Over the two-plus years I have been at APX, I have met and worked with a number of incredibly talented engineers who have immense knowledge of what they do and are motivated to explore technology to further our collective knowledge. We have worked stressful, long hours, and played just as hard. Many friendships have been formed, and hopefully they will be sustained long into the future. It has been a privilege working around such individuals, and I hope some of their enthusiasm and drive has rubbed off on me and will help me as I progress through life. I hope to keep in touch with everyone, as I am neither moving out of the area nor straying from technology as my day job.

The transition from one position to the next will be a significant change, but hopefully, career-wise, it will be for the best. I will be moving from general IT support at APX to Linux systems engineering at Berico. This work, focused on both company infrastructure and some contracts, should allow me to experiment with and develop my automation and virtualization skills by working with new technologies on the market and applying them to real situations. Whether I end up using anything from oVirt to OpenStack or Ansible to Chef, I hope to learn and develop higher-level administration/engineering ideas that I can leverage later on and throughout my career.

While transitioning from one thing to another is always hard, the future looks bright. A new opportunity awaits me to do work for others and to better myself holistically. I have nothing but the utmost respect for everyone at APX, and I wish them and the company all the best in their future endeavors. Much work has been done, but there is even more still left to do!

Onward and upward,

Tuesday, January 27, 2015

January Blizzard of 2015

(Disclosure: I am in no way, shape, or form a meteorologist or forecaster)

Snowfall totals from January 26th/27th as of 5:08pm (via @NWS)
Large storms that impact high-population areas of the world (especially in the US) always draw some sort of response from the general public - whether that response is to note that the forecast was spot-on or that it missed, it is always heard. Yesterday's storm that hit the northeast US coast was no different: it was expected to bring over a foot of snow to areas north and east of Philadelphia, all of New Jersey, New York City, and all the way up to Boston and Maine.

Forecast snowfall by the NWS New York on January 26th - ~20 hours out
Just a day ahead of the storm, there were multiple forecasts of 18-24" for the National Weather Service (NWS) New York office's region, which did not seem incredibly outlandish. The two main global models - the GFS and the Euro - consistently showed significant amounts of snow for the entire region, although they differed by more than a few inches for the greater New York City area. The same models also showed 2-3 feet of snow for the Boston metropolitan area and points north, which those areas indeed received.

During and after the storm, the system tracked slightly east, and New York was left with a maximum of about 11-12 inches of snow, significantly under the "official" forecast amounts. While still a very sizable amount of snow, for which the city appeared to sufficiently prepare, it left many disappointed as the "potentially historic" storm did not materialize. This happened for a multitude of reasons, both technical and human, but the core issue is purely one of human communication: how to accurately portray forecasts and the inevitable uncertainty that arises in trying to predict the future. The forecast presented by anybody - NWS or otherwise - does not accurately portray the entire bell curve of potential outcomes.

No forecast is going to be 100% correct, but it may be 100% wrong, depending on how events turn out. The NWS was one of the institutions whose snow estimates for this storm leaned on the high side; it appears that one model was favored over the others due to its past reliability. This gets us to the underlying issue: forecasts are scientific, educated guesses. They are analyzed thoroughly, but there is no getting around the fact that we cannot completely predict future weather. Try as we might, there is an inevitable amount of uncertainty and unknown potential. One of the key takeaways from this storm is that this known unknown needs to be presented along with the forecast, so that people have full knowledge of the situation.

Several different stations had their own takes on what the chances of snow in the NYC area would be. (@brianstelter)

All in all, the forecast as a whole was pretty decent. The total numbers were spot-on from around Long Island (30 inches!) to Connecticut and Boston, but lacking on the west side. As the image above shows, everybody has their own take on the information available to them. If uncertainty is expressed even among forecasters, then that information needs to be translated to the public as well. Why do we make the assumption - or at least make it seem - that a forecast will fall into a single category or gradient, when we know those are only the most statistically likely options?

Capital Weather Gang forecasts, including "boom" and "bust" percentages
One of the groups doing weather forecasting in the DC region is the Capital Weather Gang, who have mastered displaying a forecast along with the potential for going above or below what is predicted. Especially when presented in a highly readable format, this gives the general reader a range of snow/precipitation that might be expected, along with the explicit knowledge that the final result may exceed or fall below what is written. Some system like this appears to be needed on a larger scale, adopted by others, to help convey the challenges of forecasting. While we wish that forecasts could be completely accurate all the time, that is simply not the case.

Forecasting, like any science, is challenging. There will be good days, and there will be bad days. But at the end of each of those days, we need to evaluate what went right and/or wrong, and use those lessons to feed future forecasts. Most forecasters are not in it for the money - they are in the job, just like any other, because it is what they like: the work, the challenge, the reward, whatever it may be. A forecast that busted because an area received only a foot of snow instead of two or three? There's some sort of humor in that. They forecast that it would snow a significant amount, and it did! Were there wording mistakes all around when calling for the "potentially historic" storm to topple the charts of largest storms? Yes, that's for certain. Conveying the risk in a forecast is part of what needs to be accomplished in order to give people information on how to go about their day - just as we say there might be a 75% chance of rain on a given day - so that it can become part of their decision process for how to proceed.

All in all, the overall forecasts for this system were not bad. Areas that were forecast to receive snow received snow, and those that didn't generally did not. The global and mesoscale models did what they were supposed to and provided information to guide forecasts, and forecasters did what they thought was best. Mother Nature did what she does best and threw a couple of curveballs. No entity is infallible, but every entity has the opportunity to see where improvements may be made and execute on those. Presenting forecasts to the public is no different.

Additional reading material

Saturday, January 24, 2015

Well well...

Last post: November, 2012. Today: January, 2015. It's been an interesting 2-some years.

As much as I thought I might never bother with this site again, another part of me apparently wants to attempt to start blogging again, for reasons unknown. Much has changed in the 2-ish years since I last put anything up, and hopefully one change for the better is my ability to write and convey information.

For various reasons I will probably end up moving this to another provider - Medium, Blogger, etc. - as there really isn't much of a point in hosting this myself. In the meantime, though, I might end up putting up a couple of posts. Potential topics include a bit of fascination with DC/NOVA history, weather, aviation, or various other items that I find interesting.

EDIT: And yes, I'm laughing and shaking my head about these previous posts more than any of you are. Some (scratch that - a lot) of them are pretty terrible, both content-wise and in layout/flashiness.