
Archive for the ‘DARPA’ Category

Long haul truckers move a lot of America’s goods. You see the eighteen wheelers on the Interstates and you know those guys, and ladies, have been driving hours every day to get their load from point A to point B over distances of up to 3,500 miles. Often, when you are outside of a major metropolitan area on an Interstate, 75% of the traffic is long haul trucks. The U.S. Bureau of Labor Statistics estimates that there are 1.5 million long haul truckers on the road today, expected to go over 1.8 million by 2020. There are about 200,000 job openings nationwide for long haul truckers right now.

Why aren’t unemployed or underemployed folks flocking to these jobs? The median annual wage is almost $38,000, with some long haul truckers making more than $58,000 a year. That’s not bad for a job that does not even require a high school diploma. One hurdle is getting a CDL (commercial driver’s license), which can take eight weeks and $6,000 to earn. And the job is not for everyone: many truckers drive by themselves most of the time, and they often live for weeks at a stretch in the back of their truck in a space the size of a closet.

But I believe we are coming to the end of the long haul trucker. I predict that in ten years there will be virtually no long haul truckers, except for moving vans. Why? The first place autonomous vehicles will really take off is in long haul trucking.

We are in the very early stages of autonomous vehicles that can safely get themselves to a destination with no human intervention. Remember how long it took before there was reliable air travel. The first scheduled fixed-wing air service started in January 1914, flying from St. Petersburg to Tampa, Florida, ten years after the Wright Brothers’ flight in December 1903. That might not have been considered reliable transportation by everyone. We are almost to that stage with autonomous vehicles: the first real demonstration of an autonomous vehicle came in the 2005 DARPA Grand Challenge. At this point, four states and two cities allow autonomous vehicles on the highway (Nevada, Florida, California, Michigan, Washington DC, and Coeur d’Alene, Idaho). There are still lots of hurdles to overcome, including cost, liability laws, and public confidence, before autonomous cars are common.

The lack of confidence comes from just thinking about all the things that can go wrong in an urban environment: children playing, pedestrians, bicycles, and manned cars going through red lights, making strange turns, trying to park, or just being distracted. Over a recent six-month period, Google’s self-driving cars got into four accidents in California, a state with only 48 autonomous cars on the road. Google claims that the autonomous vehicles were not the cause of any of them. If we ever get to Google’s end point of no drivers in any car at any time, then in theory there would not be any accidents, and certainly far fewer than there are today. Getting there will not be easy.

But back to the long haul trucker. Almost the entire route is on the Interstate. Most of the distractions and dangers are removed by the design of the Interstate itself: no red lights, pedestrians, bicycles, cross traffic, or parking. The first autonomous vehicle license plate for a self-driving big rig went to a Freightliner “Inspiration Truck” in Nevada. It still requires a driver to handle turns at red lights and parking, so there must be a person in the cab.

But I view that as a short-term situation. I believe that within five years there will be thousands of autonomous big rigs on the Interstates, each pulling up to three trailers, and driving 24 hours a day at 65 to 75 miles per hour depending on the specific stretch of highway. No drivers, no one in the cab, and in fact no cab at all. Local truckers will take the trailers to a special lot near an Interstate on-ramp, where an autonomous truck will be assigned to take that trailer to another special lot outside the destination city. There, another local trucker will pick up the trailer and drive the last ten to fifty miles.

In ten years the only long haul trucks on the Interstates will be autonomous. Near major metropolitan areas, those trucks will be shunted to the far left lane, leaving the right lanes for cars to jockey for space and exits without the trucks being in the way. Imagine a line of trucks, each with up to three trailers, zooming along I-80 south of Chicago at 70 mph and about 10 feet apart. When another long-haul truck pulls onto the Interstate, the line of trucks will make space for the new truck.

The benefits to the trucking companies are obvious: no drivers to pay, no down time for the truck due to required rest breaks, and safer highways. The trucks will also be lighter, since they will not need a cab with comfortable seats, air conditioning and heating, driver safety engineering, or expensive manual controls. It will also be almost impossible to hijack an autonomous long-haul truck.

How do you back it up to pick up trailers, move it into a service bay for maintenance, or move it off the highway in an emergency? There’s an app for that. Someone can walk beside the truck for close-in maneuvering using a tablet. The trick will be making sure it only works when the person is close by and has the “keys” to the truck.
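
Here is a minimal sketch of how that proximity-plus-keys check might look. Everything in it, from the ten-meter radius to the signing scheme, is my own assumption for illustration, not any real fleet-control interface:

    # Hypothetical sketch: accept low-speed maneuvering commands only when the
    # operator holds the truck's "keys" and is physically next to the vehicle.
    # Names, the distance threshold, and the signing scheme are illustrative.
    import hashlib
    import hmac

    MAX_DISTANCE_METERS = 10.0  # assumed "walking beside the truck" range

    def sign(key: bytes, command: str) -> str:
        """HMAC of the command so the truck can verify it came from the key holder."""
        return hmac.new(key, command.encode(), hashlib.sha256).hexdigest()

    def authorize_maneuver(command: str, signature: str, distance_m: float,
                           truck_key: bytes) -> bool:
        """Allow the maneuver only if it is signed with the truck's key and the
        operator's measured distance (e.g., Bluetooth or UWB ranging) is close."""
        if distance_m > MAX_DISTANCE_METERS:
            return False
        return hmac.compare_digest(sign(truck_key, command), signature)

    # Example: a tablet asks the rig to creep backward toward a loading dock.
    key = b"secret-provisioned-at-the-depot"
    cmd = "reverse:0.5mph:heading=180"
    print(authorize_maneuver(cmd, sign(key, cmd), distance_m=4.2, truck_key=key))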

But not moving vans. They will, I believe, still have actual drivers, if for no other reason than that families like to see a familiar face when the moving van pulls up to their new house.

The last word:

The impact will extend beyond the more than one million long haul truckers. Major truck stops along the Interstate will see their business change from servicing drivers to the rare servicing of an autonomous truck with a problem. It won’t be selling fuel: the trucks will be filled up before the journey with enough fuel to get to the destination point. You should expect to see many of these truck stops go out of business.

Along with the adult stores that also serve the truckers, like the Lion’s Den chain of 40 shops along the Midwest Interstates, some with gas stations.

Comments solicited.

Keep your sense of humor.

Walt.



Last time I set the stage for Mission Resilient Clouds (MRC). This time I will review the requirements released by DARPA, the Defense Advanced Research Projects Agency of the U.S. Department of Defense.

The DARPA MRC program has three main goals:

  1. Collective Immunity.
    Several hosts working in concert can achieve greater immunity to attack than any single host can. Multiple system “voting” solutions have been used for decades (I worked briefly on one at UC Berkeley in the early 1970s); a small sketch of the voting idea appears after this list. They are, by definition, expensive. Because they try to keep multiple systems in sync, they can introduce chaotic performance behavior in networks. If the hosts are identical, then any remaining vulnerability is easily exploited. If the hosts are different (e.g., different operating systems and maybe even chip sets), the operational and maintenance costs go up geometrically. The DARPA MRC program “seeks to produce collective-immunity techniques that are scalable, resistant to coordinated attacks, and offer tunable tradeoffs between attack resistance and overhead.”
  2. Cloud-wide “public health” infrastructure.
    The goal is to maintain mission effectiveness in the face of a coordinated attack. This means that the infrastructure must recognize an attack, assess the trustworthiness of each resource in the infrastructure, and continuously reallocate resources to provide sufficient trustworthy resources to support the mission. These resources are servers, network bandwidth, and storage. It also requires that different tasks have different priorities. The MRC program seeks to produce technologies that share information across the infrastructure, and then make assessments as to the trustworthiness of individual resources. Once an attack is recognized, it must be diagnosed. When the cause is identified, patches or other workarounds need to be distributed across the infrastructure. Compromised resources need to be quarantined and potentially, in the case of a host or storage, regenerated. Most existing detection and correction technologies look for a single failure and deal with it. Since many cyber attacks are multi-step, the program looks to the “public health” infrastructure to determine if the attack is multi-step and then to take appropriate action.
  3. Manageable and taskable diversity and moving-target defense.
    Homogeneous computational systems provide rich targets for an attack, as a single attack can exploit the same vulnerability across many systems. A vulnerability shared across only a subset of the infrastructure can still be exploited and used as a base for further attack. If such a vulnerability can be exploited in a quorum of the hosts before detection, it can completely negate any collective-immunity scheme. As noted before, heterogeneous infrastructures are inherently more expensive to manage. The MRC program “seeks to develop techniques … that make all hosts appear different to the attacker while preserving a common manageability interface, thus allowing the ‘public health’ techniques to effectively monitor and control the cloud.” The “moving-target defense” is to periodically reallocate tasks to different servers to make it more difficult for an attacker to “map” the infrastructure and launch coordinated attacks. This is much like the “random” frequency hopping some wireless systems use to make it harder to read or even jam the signal. Since any of these techniques can consume significant resources, the program specifies that they should be tunable, allowing the resource utilization to be changed to match the current threat level.
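
To make the first goal, collective immunity, concrete, here is a toy version of the voting idea in Python. The quorum rule and the example results are my own illustration, not the MRC design:

    # Toy collective-immunity voting: several replicas answer the same request,
    # and the result is trusted only if a quorum of them agree.
    from collections import Counter

    def quorum_vote(replica_results, quorum):
        """Return the agreed result if at least `quorum` replicas match, else None."""
        if not replica_results:
            return None
        value, count = Counter(replica_results).most_common(1)[0]
        return value if count >= quorum else None

    # Three diverse hosts run the same request; one has been compromised and lies.
    results = ["transfer:OK", "transfer:OK", "transfer:DENIED"]
    print(quorum_vote(results, quorum=2))  # -> transfer:OK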

The MRC program has five technical areas of interest.

  1. Scalable and tunable innate distributed defenses.
    This technical area capitalizes on the virtually infinite size of the Cloud to erect defenses against penetration. Possible techniques include fault-tolerant computing, proactive recovery, and proactive defensive techniques against a skilled cyber terrorist in the Cloud. “The goal is to create tunable and analyzable versions of these techniques that are capable of making tradeoffs between resource consumption and level of guarantee, and that offer predictable behavior in large-scale networked environments.”
  2. Shared situational awareness, trust modeling, and diagnosis.
    This technical area focuses on sharing diagnostic information and the diagnosis of large-scale, multi-step attacks.  This area builds on other DARPA projects that have focused on individual hosts.  This area also covers modeling the trustworthiness of individual resources within the Cloud, plus developing “attack plan recognition” techniques that can recognize Cloud-wide attacks in their earliest stages.
  3. Optimizing missions and resources.
    This technical area also takes advantage of the massively redundant resources in the Cloud. The goal of this area is to create the planning technologies that will continuously re-plan the allocation of resources to meet the mission’s needs.
  4. Mission-aware networking.
    Currently, the Cloud network technologies tend to maximize overall throughput.  This can sometimes leave individual nodes with too little or no communication.  The goals of this technical area are to allow the network to measure its own behavior and allocate resources according to the individual mission priorities.
  5. Manageable and taskable diversity.
    This technical area will develop new techniques to make each host appear different to attackers and to shuffle the allocation of tasks to hosts so the attacker perceives a moving target (a small sketch of that shuffling follows this list).
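
To picture that moving-target shuffling, here is a toy sketch in Python. The task names, host names, and fixed seeds are invented for illustration, and a real system would tie the reshuffle interval to the current threat level:

    # Toy moving-target defense: periodically reassign tasks to hosts at random
    # so an attacker's map of the cloud goes stale.
    import random

    def reshuffle(tasks, hosts, seed=None):
        """Return a fresh random assignment of tasks to hosts."""
        rng = random.Random(seed)
        shuffled = hosts[:]
        rng.shuffle(shuffled)
        return {task: shuffled[i % len(shuffled)] for i, task in enumerate(tasks)}

    tasks = ["logistics-db", "imagery-cache", "messaging", "planning"]
    hosts = ["host-a", "host-b", "host-c", "host-d", "host-e"]

    # Two successive "epochs"; in practice the timer would be threat-tunable.
    print(reshuffle(tasks, hosts, seed=1))
    print(reshuffle(tasks, hosts, seed=2))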

Individual companies can bid on one or more of the technical areas. DARPA is planning on spending real money on this effort, with multiple awards expected in each technical area, and each award ranging from $500,000 to $1.5 million per year.

DARPA has a fairly optimistic time line for this project, with integration and testing in 2015.

What will we all get out of this project?  Probably not what we expect, and probably a lot more than we can imagine.  DARPA is a military research group, but the first generally visible positive effects will probably be in the financial world.  As General Peter Pace, former Chairman of the Joint Chiefs of Staff, indicated: the job of the Joint Chiefs of Staff is to protect the U.S. from foreign attack, and that includes attacks on our financial systems. I expect that we will see, as early as 2012, some significant security improvements in Cloud-based financial systems, including your own bank-from-home activities, as a result of the MRC program and similar private industry programs.

I also expect we may see, in the same timeframe, self-generating smart phone networks as another commercial outcome.  These will probably evolve as a natural extension of today’s mobile ad-hoc networks.  As you send your team out into an area with no network infrastructure, the individual smart phones will use each other to eventually find their way to a real network hub.
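
Here is a toy illustration of that idea in Python; the phone names, topology, and routing approach (a simple breadth-first search) are invented for the example, not any real ad-hoc protocol:

    # Toy ad-hoc routing: each phone only knows its nearby peers, and a message
    # hops peer-to-peer until it reaches a device with a real network uplink.
    from collections import deque

    def route_to_uplink(links, start, has_uplink):
        """Breadth-first search over peer links; returns the hop path or None."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            node = path[-1]
            if has_uplink(node):
                return path
            for peer in links.get(node, []):
                if peer not in seen:
                    seen.add(peer)
                    queue.append(path + [peer])
        return None

    links = {"phone1": ["phone2"], "phone2": ["phone1", "phone3"],
             "phone3": ["phone2", "basecamp"], "basecamp": ["phone3"]}
    print(route_to_uplink(links, "phone1", lambda n: n == "basecamp"))
    # -> ['phone1', 'phone2', 'phone3', 'basecamp']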

And, just maybe, we can get a little ahead of the cyber terrorists out there.

The last word:

The Cloud is evolving rapidly. Today, the Cloud is not a good solution for a number of applications. The usual problems are security and performance. Performance issues can usually be resolved by spending money – it becomes a simple business decision that is getting easier to make every year. Security in the Cloud is also evolving, and fairly quickly. If you should not move something to the Cloud today, look again in six months. Watch your competitors and pay attention to the Cloud Service Providers that are advertising in your trade journals. You may not want to be the first in your industry to jump fully into the Cloud, but you do not want to be the last to get the financial and agility advantages of the Cloud.

Comments solicited.

Keep your sense of humor.

Walt.


As a network, the Internet is amazingly resilient, mostly. If you are in Philadelphia and accessing a website hosted in New York City, those messages may go directly “up the turnpike,” a distance of about 100 miles. If there is an Internet “accident” around exit 7, the Internet may reroute your traffic through Kansas City, adding about 2,000 miles to the trip. Unless you specifically look, you can’t tell this is happening. That minor detour to Kansas City may add about 90 milliseconds to the round trip (about a tenth of a second).
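
For a rough sanity check on that number, here is my own back-of-the-envelope math: assume signals move through fiber at roughly two-thirds the speed of light and that the detour adds about 2,000 miles in each direction; extra router hops and queuing along the longer path account for the rest of the delay.

    # Back-of-the-envelope propagation delay for the Kansas City detour.
    extra_miles_each_way = 2000
    round_trip_km = extra_miles_each_way * 1.609 * 2   # roughly 6,400 km
    fiber_speed_km_per_ms = 200                        # ~2/3 the speed of light
    print(round(round_trip_km / fiber_speed_km_per_ms, 1))  # ~32 ms of pure propagation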

In the middle, the Internet is highly redundant with the ability to immediately react to load or routing problems. The real availability vulnerability is at the ends: the “last mile” problem.

In your own home, Internet outages are caused by:

  • Failures or misconfiguration of your own equipment like your computer, netbook, tablet, or router.
  • Your ISP (Internet Service Provider) like the phone company, cable company, or an independent ISP that piggy-backs on some existing connection to your house.
  • Your electric company, unless you have a UPS on your modem and other critical equipment.

Most people are not willing to pay for redundancy in these areas. After all, that is why Starbucks and Barnes and Noble exist, right?

These vulnerabilities exist no matter where your “last mile” points are.  What if your customers or employees are moving around in places with unreliable connections to the Internet, yet they need a reliable connection to do their job? Some examples:

  • A team working in some really out-of-the way place, like searching for a power source in the wilds of Alaska, a new medicinal opportunity in the Amazon region, exploring an ancient city deep in the desert or jungle, exploring caves, or working in mines.
  • A team providing aid in a crisis, where the existing infrastructure has been damaged or destroyed. Think of Haiti after the earthquake.
  • A situation where for political, military, or competitive reasons, someone is actively out to prevent you from communicating by physically destroying land lines, communication towers, or local data centers; launching cyber attacks; or jamming wireless signals.

The problem can be at the other end also. One example I discussed earlier was Amazon’s Public Cloud offering, Elastic Compute Cloud (EC2). EC2 had a serious service interruption lasting nearly two days in late April. Some customers who chose not to pay for one of Amazon’s high-availability options were down.

The Internet is a key component of the Cloud. The Cloud does not change any of your availability requirements; it just changes what you might have to do to meet them. Even when you have your own data center, your dependence on the Internet is still significant. You probably use the Internet to communicate with customers, partners, and traveling employees. In some cases, you may be paying for private networks. These are expensive, and in general very reliable; however, they are subject to the same kinds of last mile problems. Going to the Cloud adds one more piece to the puzzle: your Cloud Service Provider (CSP). This is actually a significant benefit, not an additional risk. Many CSPs have redundant power and multiple physical connections to multiple ISPs at each of multiple data centers. For a price, you can pretty much guarantee that the other end is always available. That price will be a lot less than it would cost you to do it by yourself.
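
As a back-of-the-envelope illustration of why that provider-side redundancy matters, here is the math with made-up availability numbers:

    # Two independent connections that are each up 99% of the time are both
    # down only about 0.01% of the time. (Illustrative numbers only.)
    single_link_availability = 0.99
    both_down = (1 - single_link_availability) ** 2
    print(f"combined availability: {1 - both_down:.4%}")  # -> 99.9900%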

The real Cloud vulnerability is the result of the vulnerabilities of the hosts within “your” Cloud. There is a high degree of trust among the hosts within the Cloud infrastructure. This trust tends to magnify problems, allowing any malware that gets into one system to, potentially, quickly propagate to many others. The result is that any vulnerability is multiplied, usually even faster in the Cloud than in normal networked environments. Today’s hosts are very vulnerable. With close attention to keeping them updated with the latest security patches and general security best practices, they can become reasonably secure. However, the Cloud dramatically amplifies any residual vulnerability in the hosts. The defenders have to protect against all vulnerabilities; the attacker only needs to find one.
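
Here is a toy simulation of that amplifying effect, with purely illustrative numbers: when every host implicitly trusts its peers, one compromised machine can reach the whole cluster in a single hop.

    # Toy model of malware spreading along trust relationships between hosts.
    def spread(trust, infected, rounds):
        infected = set(infected)
        for _ in range(rounds):
            newly = {peer for host in infected
                     for peer in trust.get(host, []) if peer not in infected}
            infected |= newly
        return infected

    hosts = [f"host{i}" for i in range(8)]
    full_trust = {h: [p for p in hosts if p != h] for h in hosts}  # everyone trusts everyone
    print(len(spread(full_trust, {"host0"}, rounds=1)))  # -> 8: one hop compromises them all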

The Cloud is everywhere – in both the commercial and public sector markets.

The United States government is moving to the Cloud. In November, the Office of Management and Budget (a cabinet-level office within the Executive Office of the President) announced a “cloud-first strategy.” This strategy encourages all federal agencies to consider deploying Cloud Computing solutions for the same reason the commercial world is embracing the Cloud: boost reliability at affordable costs.

In the defense sector, the Defense Information Systems Agency (DISA) is also embracing the Cloud. Dave Mihelcic, the DISA CTO, recently said he wants his organization to provide Cloud Computing services to the US Department of Defense (DoD). He said DISA is “uniquely positioned” to provide the DoD Cloud Computing services for both classified and unclassified information.  DISA is likely to face stiff competition from the private sector, as Cloud Service Providers make their own bids for military Clouds.

Many DoD systems are controlled by computers, and these computers are rapidly becoming interconnected. General Peter Pace, former chairman of the Joint Chiefs of Staff, indicated in April that it was critical that the DoD be able to detect when these networks and systems were under attack, and, more importantly, defend these networks and systems without compromising the defense systems that rely on these networks.

General Pace is reporting a real dilemma: the cost of using the Cloud is at least an order of magnitude less than using the legacy DoD networks, but over 250,000 probes hit DoD networks every hour according to General Keith Alexander, director of the National Security Agency and commander of the U.S. Cyber Command.

Enter a traditional white knight: DARPA. The DoD’s Defense Advanced Research Projects Agency recently introduced a new project to create Mission Resilient Clouds. DARPA was created in 1958 as a response to the Soviet Union’s launch of Sputnik. While formed as part of the DoD, from the beginning DARPA’s role was to expand the frontiers of technology beyond the immediate needs of the U.S. military. Many prior DARPA projects have entered the general market. For those of us old enough to remember it, time-sharing, an early form of virtualization, was created as a joint project of Bell Labs, General Electric, and the Massachusetts Institute of Technology (MIT), funded by DARPA. This was followed by ARPANET, the real origin of the Internet.

All joking aside, Vice President Al Gore never said he invented the Internet. What he did say, and actually did as a US Congressman, was actively support a wide range of technology initiatives including those within DARPA that led to the Internet.

DARPA is working on building a Cloud-based network that can support military missions while under cyber attack: the Mission Resilient Cloud. MRC is a companion program to the existing CRASH project (Clean-slate design of Resilient, Adaptive, Secure Hosts). CRASH aims to limit the vulnerabilities within each host in a Cloud. MRC will focus on making the Cloud itself more resilient, damping down the impact of attacks instead of amplifying them as today’s infrastructure does.

DARPA released the project announcement in June, with proposals due July 25 and initial testing planned for 2015.

Next time I’ll look at the specific requirements and speculate on how, and when, we should see this capability in the general Cloud environment.

The last word:

One major vulnerability to the Internet is government. As we saw in the “Arab Spring,” some governments thought they could stop the crisis by stopping the Internet. Those attempts proved two things:

  1. Stopping the Internet at the country level has a huge economic hit on the country. Some leaders decided for that reason that they needed to reopen the Internet. Other leaders decided that if they were out of power they did not care about the country’s economy.
  2. While shutting down the Internet can reduce people’s ability to communicate, it seemed to be too late. People had learned that when they communicate with each other they can act cohesively and have a huge impact. In many countries the people found other ways to communicate, from using other technologies like cell phones to low tech means like shouting from the tops of buildings.

Many countries, including the United States, are making plans and passing laws to enable them to “legally” shut down the Internet to “protect the people.” It is time to ask who your government is protecting from what.

Comments solicited.

Keep your sense of humor.

Walt.
