Autonomous Vehicle Update

Tesla is having an interesting year with its Autopilot capabilities. Its software, supported by cameras, radar, and ultrasonic sensors, can automatically drive on a highway, including changing lanes and adjusting speed based on surrounding traffic. It can find a parking space and parallel park itself. It even comes when called: the Summon feature lets you call your car from your phone and have it meet you at your front door. How many times have you wished for that feature when you can’t quite remember where you parked in that huge shopping center lot on a cold, rainy day?

Joshua Neally is quite happy with his Model X. On his way home in Springfield, Missouri, he suddenly felt something like “a steel pole through my chest.” He let his Tesla autonomously drive him more than 20 miles to the off-ramp near the closest hospital. Neally had to drive himself the final stretch, but he survived a pulmonary embolism, which kills 50,000 people a year, 70% of them within the first hour.

On the other hand, Joshua Brown had a different experience with his Tesla Model S in Williston, Florida. His car’s sensor system failed to detect a large white 18-wheeler crossing the highway on a bright, sunny day. Brown was killed when his car went under the trailer, which tore off the car’s roof.

Tesla cars are not fully autonomous: they require the driver to remain alert and keep their hands on the steering wheel. Neally’s car got him safely 20 miles and off the freeway, but he had to drive himself to the emergency entrance. Brown was apparently watching a Harry Potter movie at the time of the crash; the movie was reportedly still playing when the car finally stopped, after snapping a telephone pole a quarter mile past the point of impact.

Uber in Pittsburgh is offering rides in self-driving Ford Fusion cars. There is still a driver who must be ready to take over at any time. The current software will not automatically change lanes to avoid an obstacle, such as a delivery van double-parked on a city street; the driver needs to take control to go safely around it.

As with today’s cruise control, you must engage the self-driving features, and manually braking or accelerating disables them.

Surprisingly, bridges can be a problem for autonomous vehicles. One might think that nothing could be simpler than a bridge: a straight set of well-marked lanes with a definite right side and minimal distractions like pedestrians and cross streets. But that simplicity is a big part of the problem. Without environmental cues like buildings, it is harder for the car to figure out exactly where it is. This is one reason Pittsburgh was chosen for this first Uber self-driving car rollout. Pittsburgh also has four seasons, an irregular grid of roads, lots of potholes, and Carnegie Mellon University’s robotics center, which provides the self-driving hardware and software.

We are in the awkward learning phase with autonomous vehicles. Having someone behind the wheel to take over is problematic. It is hard enough to stay focused on driving when you are actually driving, let alone when you have very little to do.

The last word:

If you have one of the current crop of semi-autonomous vehicles, you must pay attention at all times. Tesla emits an audible chime when it detects that the driver’s hands are not on the wheel. Uber currently has an engineer behind the wheel, and another ride-along engineer in the passenger seat monitoring the car and taking notes.

If you are riding in a semi-autonomous vehicle, like the Uber cars in Pittsburgh, do not distract the driver. Treat the driver like you would in a normal vehicle.

On the other hand, Teslas had been driven more than 130 million miles in Autopilot mode before the first fatality, compared to one fatality for every 94 million miles driven in the US and every 60 million miles worldwide.

Always remember that these software systems are still in beta. You wouldn’t trust your business to beta software; don’t trust your life to it!

Comments solicited.

Keep your sense of humor.

Walt.

This is the last of a series of four blogs about quantum computing. The first was a quick view into the weird world of quantum physics, followed by a look at what capabilities a quantum computer would have. Last time we looked at the significant implications a quantum computer will have for data security.

Here are some examples of where we are today:

  • MIT has created a five-qubit quantum computer (Science, March 2016).
  • The Canadian company D-Wave Systems shipped its first quantum computer in 2010 with 128 qubits, and has since announced the availability of the D-Wave 2X system with more than 1,000 qubits. On the other hand, there are lots of skeptics about whether what D-Wave is creating is really a quantum computer. It clearly uses some quantum capabilities, but if I understand it correctly (a big “if”), it deliberately avoids using superposition and quantum entanglement. If so, that limits the capabilities of its quantum computers. However, D-Wave is way ahead of anybody else in actually building a computer based on quantum concepts.
  • The Australian company Shoal created a quantum computer for the Australian Department of Defence in 2014, and then spun off QxBranch as a quantum computing software company working closely with D-Wave.
  • California-based Rigetti Computing is developing fault-tolerant, gate-based, solid-state quantum processors that it claims are highly scalable and low cost.

The largest number successfully factored by a quantum computer is 56,153 (233 × 241). At this point, factoring that 16-bit number on a quantum computer takes longer than factoring it on a modern classical computer. Today’s encryption keys have at least 768 bits.
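
For a sense of scale, here is a minimal sketch in Python (classical brute-force trial division, nothing quantum about it) that factors 56,153 instantly; the same approach becomes hopeless once a number has hundreds of digits, because the work grows with roughly the square root of the number:

    # Classical trial division: fine for a 16-bit number like 56,153,
    # hopeless for the numbers hundreds of digits long used in real keys.
    def trial_division(n):
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(trial_division(56153))   # [233, 241]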

How long will it take to have a quantum computer large enough to threaten today’s network security practices? It took 25 years from the first digital computer (ENIAC, 1946) until computers were powerful enough and ubiquitous enough to create the first primitive networks (ARPANET, 1971). It took another 19 years until Tim Berners-Lee created the first web browser in 1990 and the World Wide Web was born. It won’t take that long to get real quantum computers: maybe twenty years, but more likely closer to ten.

The last word:

You don’t have to worry about a quantum computer cracking your network security and exposing all of your secrets. Yet. You do need to remain vigilant, because someday there will be such a quantum computer. You can bet the first such computers will be deep inside organizations like the US National Security Agency (NSA) and similar agencies in other countries.

For those of us who lived through or even participated in the space race, one of the significant differences between the US and the USSR was openness: the US did everything in public, while the USSR did everything in secret and only revealed its successes after the fact. These days, the NSA acts much more like the Soviet model, keeping a tight rein on security products, with the ability and inclination to prevent technologies from entering the marketplace until the NSA is ready.

Our first indication of the existence of a powerful quantum computer may be the successful attack on a nation’s political, military, financial or physical infrastructure.

Comments solicited.

Keep your sense of humor.

Walt.

This is the third in my Quantum Computing series. Last time I indicated that the two main areas in which quantum computers will be very much faster than digital computers are searching and factoring. The average individual and almost every company will rarely need the incredible searching capabilities of a large quantum computer, and I suspect that specialized companies will be created in the next twenty years or so to handle the special cases that do come up.

But everyone should be concerned about a quantum computer’s capability to almost instantaneously factor large numbers. To understand why, we have to understand how encryption is actually done in our digital world. There are two main types of encryption: symmetric-key encryption and public-key encryption.

Symmetric-key encryption uses the same key for both encryption and decryption. Both parties must have the same key in order to communicate securely. We use symmetric-key encryption every day: whenever you see https:// (instead of just http://) in an Internet URL, you are using symmetric-key encryption. Symmetric-key encryption algorithms are subject to various attacks based on the process that generates the symmetric key, but the biggest issue is how to securely transmit the key between the two parties. That key sharing usually involves some form of public-key encryption.

Public-key encryption has two keys: a published key that anyone can use to encrypt messages and a private decryption key that only the receiver has. While the process to generate the pair of keys is mathematically amusing, the key step is to multiply two very large prime numbers together. The public key includes that product plus another calculated value; the matching private key can only be computed easily by someone who knows the two primes. The security of public-key encryption is based on the time it takes, with current digital technology, to determine the two prime factors used to compute the keys. That factoring time goes up exponentially as the key gets larger, so that today, by the time some organization could break a code, the data would be of historical interest only.
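
As a purely hypothetical illustration (toy numbers only; real keys use primes hundreds of digits long), here is a textbook-style sketch in Python of how a key pair is built from two primes:

    # Toy RSA-style key generation: illustration only, not real cryptography.
    p, q = 61, 53                      # the two secret primes
    n = p * q                          # 3233: the product, published as part of the public key
    phi = (p - 1) * (q - 1)            # 3120: the value calculated from the two primes
    e = 17                             # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)                # 2753: private exponent (modular inverse, Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (n, e)
    recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
    print(ciphertext, recovered)       # 2790 65

An attacker who could factor n back into 61 and 53 could recompute the private key the same way; the difficulty of that factoring step is what keeps the scheme secure.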

However, Peter Shor at MIT has shown that a quantum computer could factor large numbers easily, meaning very quickly. Oops.

Quantum computers could end the predominance of public-key encryption algorithms, which would also seriously impact symmetric-key encryption.

The ideal cryptographic protocol is the “one-time pad,” first described in 1882. A one-time pad is a random secret key that is used only once. It was originally an actual pad of paper that contained the key, or more likely a set of keys. The pads were then physically carried from one party to the other, often using clandestine methods. The KGB created one-time pads that could fit inside a walnut shell. Today, most symmetric-key algorithms create a one-time-use key in real time for short-term use. For example, https security creates a new key for each communication session. If you are communicating over https with multiple sites at the same time from the same browser, each of those sessions has a different symmetric key.
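
The mechanics are simple enough to show in a few lines. This sketch XORs a message with a random key of the same length, which is the classic one-time pad construction; its security rests entirely on the key being truly random, kept secret, and never reused:

    import secrets

    def xor_otp(data: bytes, key: bytes) -> bytes:
        # XOR each byte of the data with the corresponding key byte.
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"MEET AT DAWN"
    key = secrets.token_bytes(len(message))   # random, same length as the message, used once
    ciphertext = xor_otp(message, key)
    print(xor_otp(ciphertext, key))           # XOR with the same key decrypts: b'MEET AT DAWN'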

Quantum computing to the rescue: Quantum Key Distribution (QKD) allows completely random keys to be distributed at a distance, solving the biggest security problem with symmetric-key encryption. A key generator creates two entangled qubits (photons, perhaps) and sends one to each party. Each party measures one attribute of its qubit (say, polarization) and assigns a bit (0 or 1) based on the attribute value. Due to entanglement, both parties will get the same answer. Repeating this process can generate a symmetric key of any appropriate length, normally no larger than 256 bits.

More importantly, the parties can tell if anyone intercepted their qubits. If someone does intercept the qubit distribution, that interception will disturb the entanglement and the keys will no longer match. Problem solved.
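
Here is a deliberately classical toy model of that idea (real QKD protocols are considerably more subtle, and this sketch does not capture actual quantum behavior): each “entangled pair” hands both parties the same random bit, and an eavesdropper’s measurement disturbs some pairs, so comparing the resulting keys exposes the interception:

    import random

    def distribute_key(length, eavesdropper=False):
        alice, bob = [], []
        for _ in range(length):
            bit = random.randint(0, 1)    # both parties read the same bit from the pair
            a, b = bit, bit
            if eavesdropper and random.random() < 0.5:
                b ^= 1                    # interception disturbs the pair; bits may no longer match
            alice.append(a)
            bob.append(b)
        return alice, bob

    a, b = distribute_key(256)
    print(a == b)                         # True: the keys match and are safe to use

    a, b = distribute_key(256, eavesdropper=True)
    print(a == b)                         # almost certainly False: the interception is detected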

The last word:

Perhaps one of the strangest potential uses of a quantum computer is to simulate quantum systems. This will allow scientists to understand what is really happening at the quantum level, and could perhaps lead to amazing new products in a variety of areas.

We have no idea what the quantum computer will eventually do. Howard Aiken was a pioneer computer engineer and the original conceptual designer behind the IBM Harvard Mark I computer in 1944. In 1952, he said, “Originally one thought that if there were a half dozen large computers in this country, hidden away in research laboratories, this would take care of all requirements we had throughout the country.”

Comments solicited.

Keep your sense of humor.

Walt.

Last time we talked about the weird quantum universe, where particles can be in more than one state at the same time and can be entangled with another particle at great distances. What does this have to do with computers?

Since their beginnings in the late 1930s, all digital computers have been based on binary digits (bits), which can have a value of zero or one. Early computers might have a few thousand bits in a box the size of a refrigerator. Your smart phone probably has more than 100 billion bits in a box that fits in your pocket. Digital computers work because the computer can tell a particular bit to be a zero or a one, and it will stay in that state until it is explicitly told to change. This is a good thing, and a bad thing. It is good because we can rely on the state of that bit. It is bad because it takes time to set the value of a bit, and later to look at it to see what that state is. Because of that, there are problems that would take digital computers millions of years to solve.

The quantum world is a little different. Niels Bohr, who won the 1922 Nobel Prize in physics for his work in quantum theory, famously said, “If quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet.”

Quantum computers use quantum bits, called qubits. Qubits are very small; they must be to have quantum effects. They can be individual atoms, photons, or electrons. Unlike a bit in a digital computer, each qubit can be in multiple states simultaneously. The effect is that a quantum computer with a single qubit could solve two simple problems at the same time, one with two qubits could solve four simultaneously, and one with ten qubits could solve over a thousand simple problems simultaneously. This is called parallelism. A 30-qubit quantum computer would be at least a thousand times faster than today’s conventional desktop computer.
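
The arithmetic behind those numbers is just powers of two, since each additional qubit doubles the number of states held at once; a quick sketch:

    # Each additional qubit doubles the number of simultaneous states.
    for n in (1, 2, 10, 30):
        print(n, "qubits ->", 2 ** n, "simultaneous states")
    # 1 -> 2, 2 -> 4, 10 -> 1,024, 30 -> 1,073,741,824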

To solve very complex problems, the digital world has built computers that exploit parallelism. Computer companies have created machines with dozens to thousands of separate processors, each doing the same function across a large array of data. IBM’s Blue Gene is one example of a massively parallel supercomputer. SETI@home (Search for Extraterrestrial Intelligence) is an example of a distributed parallelism approach, often called grid computing, where 290,000 computers around the world can work on the same problem across the Internet.

Blue Gene cost $100 million to build. SETI@home takes advantage of idle time on lots of computers, so it is hard to put a dollar cost on it. In ten years, it managed to cover only about 20% of the celestial sphere.

One problem with a qubit is that if you look at it, it assumes a specific state and stays there; the superposition disappears. This is where entanglement comes in: it lets you look at the value of a qubit’s attribute indirectly.

Armed with a quantum computer, what could you do? In general, not what you do today on your digital mainframe, desktop, or phone. A quantum computer would be lousy at balancing a checkbook, creating a spreadsheet, or writing a book. (Unless you were trying to prove the infinite monkey theorem: a monkey sitting at a typewriter hitting the keys at random would eventually write the complete works of William Shakespeare.)

Quantum computers will be very good in two areas: search and factoring.

  • If you have a lot of data to search (like the SETI problem), a quantum computer could look at all of the data at once.
  • It is easy for a digital computer to multiply two large numbers, even if they each have hundreds of digits. Determining the factors of a very large number, like one with 500 digits, is another matter: it can take thousands of years with today’s fastest computers. A quantum computer could factor a 500-digit number in seconds.

OK, a very quick search would be neat but Google is fast enough for me. And I never even want to think about factoring a 500-digit number, so I don’t care about that.

Next time we will explore why you should care.

The last word:

As you probably know, the US is far behind most of the rest of the world in cell phone and credit card technology. Almost every point of sale in Europe takes EMV cards: smart cards that store the card data in an integrated circuit embedded in the card, often paired with an RFID chip for contactless use. If you are wondering what the acronym “EMV” stands for, it is “Europay, Mastercard and Visa,” the three companies that created the standard.

There are two security concerns you should be aware of:

  1. The RFID chip in your smart credit card can be read from a distance of up to three feet. Anyone close to you for a second could be a thief stealing your card data. Always carry the card in an RFID-shielded envelope, wallet, or purse.
  2. Because non-smart cards are a pain for retailers in Europe, they don’t like to take them, and when they do they are not very careful. They won’t bother checking the signature on the card, even if you have signed it with “CHECK ID.” Anyone who steals your physical card will have no problem using it. Your credit card company should cover your costs, but it will be a huge pain. Never carry debit cards – you have very little protection against someone emptying your bank account, and they can do it in a matter of minutes.

Comments solicited.

Keep your sense of humor.

Walt.

In my last post I indicated that the fifty-year trend of doubling computer processing power every two years was coming to an end, and growth with the current integrated circuit technologies is expected to become almost flat by 2018. One of the possible ways around this limit is quantum computing. Quantum computers take advantage of the strange world of quantum mechanics.

This is the first in a series of planned posts: a brief overview of quantum physics (with no math), a discussion of quantum computers, their potential impact especially as it pertains to data security, and the current state of development of quantum computers.

Most of us spend nearly all of our time in a Newtonian physical environment. If we drop something, it falls, and we can predict how long it will take to fall. We can throw a baseball with, depending on our skill, a reasonable expectation that we know where it will go. We know how long it will take for something to travel between two points, subject to understandable issues like traffic jams, construction, or weather. This works whether we are trying to drive to the grocery store or land a spacecraft on Mars.

But outside of this environment, things may work differently. At really high speeds, relativity has an effect. NASA scientists were able to measure this admittedly very small effect on the Apollo missions to the moon: the astronauts aged a tiny fraction of a second less than the rest of us stuck on Earth, due to the speed of their travel to and from the moon. At very small sizes, quantum effects take over, and some of these effects may seem downright weird.

One of these weird effects is the uncertainty principle: if you measure one aspect of a quantum particle, you cannot then precisely measure another. At the quantum level, a policeman could determine how fast you were driving, but could not then determine where you were.

Superposition is the principle that a quantum object is actually in all possible states simultaneously, all the time, until something checks it. You have probably heard of the “Schrödinger’s cat” thought experiment: place a live cat in a steel box along with a sealed vial of a highly poisonous gas, a hammer, a very small amount of radioactive material, and a very sensitive radioactive-decay sensor (e.g., a Geiger counter). If the sensor detects the decay of a single radioactive atom during the test period, a relay causes the hammer to fall on the vial of gas and the cat dies. If not, the cat lives. We cannot know the state of the cat until we actually break into the box and look, and in the quantum world the cat is both dead and alive until we observe it. If you are a cat lover, substitute “evil squirrel” for “cat” in this paragraph. Like most thought experiments, this one is technically flawed; a cat is not a quantum entity, and “looking” at one cannot change its living state.

Quantum entanglement may be one of the strangest concepts in the weird quantum world. If two particles are entangled, then when you measure one property of one of the particles, the same property of the other particle is identical. Measuring the property on one particle fixes it for the other, so if the other is also measured it will always agree with the first. Somehow the second particle learns the result from the first, and this happens instantaneously over any distance; when the particles are far enough apart, this “learning” travels faster than the speed of light. However, you do not learn anything about any other property of the entangled pair. Any other property still has all of its possible values on each particle, and there is no relation between the particles for that value. For example, suppose these entangled quantum particles had two properties: color (red or green) and shape (cube or sphere). If you measured the shape of one of the particles and found it to be a cube, then the other particle would also be a cube. However, the color of each particle could be red or green independently.
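
If you like to see the correlations spelled out, here is a purely classical stand-in (an illustration of the description above, not a simulation of real quantum mechanics): the measured shape is shared across the pair, while each particle’s color remains independent:

    import random

    def entangled_pair():
        shape = random.choice(["cube", "sphere"])   # the shared property always agrees
        color1 = random.choice(["red", "green"])    # the other property stays independent
        color2 = random.choice(["red", "green"])
        return {"shape": shape, "color": color1}, {"shape": shape, "color": color2}

    p1, p2 = entangled_pair()
    print(p1["shape"] == p2["shape"])   # always True
    print(p1["color"], p2["color"])     # may or may not match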

As the current computer chip technology gets faster, the individual components on the chip of necessity get smaller and closer together. At the speed our current chips operate, the speed of light is too slow. We need to have components as close to each other as possible to minimize the time it takes a signal to travel between components. At this time, components may be the size of a molecule. Chips with this level of component density face three main challenges: high defect rates, process variation, and quantum effects. The first two challenges are simply the result of the closer tolerances in the actual manufacturing phase. The industry has been pushing these limits for the past couple of decades, resulting in the continuous development of more exacting, and more expensive, manufacturing methods as well as more stringent testing processes.

Quantum effects are not as easily overcome. They can cause high leakage currents and large variability in the device’s characteristics. As transistors get smaller, quantum superposition will make it impossible to distinguish between the two states of a transistor. The real barrier to unlimited performance increases in computers using today’s technology is the reality of the quantum universe.

The last word:

The two main national political conventions / carnivals are over. The Republican convention in Cleveland had no fence around the convention center, and fewer than 25 arrests for the entire four days. The Democratic convention in Philadelphia had an eight-foot tall fence around the convention arena, raised in part to twelve feet after the first night. There were more than 50 protestors cited and removed by police during the first day alone, and those protests were literally drowned out by several severe thunderstorms with gale-force winds.

Please do not jump to any conclusions from these data points. As I have said before, apparent correlation does not imply causation. The different results may be due more to the difference between Cleveland and Philadelphia, or the weather, than to the political parties.

One clear distinction that is caused by the parties is the presence, or absence, of the US flag within the convention. For the first two days of the Democratic convention, there were no US flags within the convention center, a sports arena. The Democratic committee had all of the US flags removed, including the huge one that has always hung above the center in the arena. Apparently, it interfered with the balloons.

Alas, the American voter is left with the sad choice between a clown and a criminal.

Comments solicited.

Keep your sense of humor.

Walt.

In 1965, Gordon Moore predicted that computer chips would double in performance every two years at low cost, a prediction now known as Moore’s Law. He also predicted that chips would eventually be so small and inexpensive that they could be embedded in homes, cars, and what he called “personal portable communications equipment.” In 1968, he and Robert Noyce founded NM Electronics, soon renamed Integrated Electronics and then shortened to Intel. Integrated circuits were just in their infancy at that point, with companies trying to deal with the technical issues of putting even eight transistors on a single chip.

Of course, Moore’s Law is not a physical law, but, thanks in significant part to the work of Intel, it has held remarkably true for 50 years. However, earlier this year Intel announced that it will no longer keep up with Moore’s Law.
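
Doubling every two years compounds quickly; here is a back-of-the-envelope sketch of what 50 years of Moore’s Law implies:

    # One doubling every two years, for 50 years.
    years = 50
    doublings = years // 2
    print(2 ** doublings)   # 33554432: roughly a 33-million-fold improvement since 1965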

This issue is part science and part finance. As the microscopic transistors in a modern integrated circuit shrink, they end up closer together. This causes two problems: heat and quantum effects. To achieve the desired high performance, these densely packed transistors generate a lot of heat, and too much heat can literally fry a chip, making it useless. Quantum effects cause transistor behavior to become unpredictable, not a desired trait in the way we use computers today.

The financial problem is the cost of producing these new integrated circuits. Today, each machine that “stamps” out chips costs about US$50M. Each future “generation” of chips will increase design and production costs by up to 50%, meaning a new chip factory may cost US$10B to build.

What does this mean to you and your company? Probably not much in the short term. In fact, if your products lag a little in their use of technology, this may give you a chance to catch up. Not surprisingly, chip manufacturers are working on several alternative solutions to continue to drive growth in semiconductors:

  • Continue on the current path. The real obstacle is simply money. For those cases where you actually need maximum performance from a single small package, you will likely be able to get it. You may not like the cost, as production costs will be high and demand smaller.
  • New technologies, including spintronics, carbon nanotubes, and quantum computing. Intel plans to move away from silicon-based transistors over the next 4-5 years.

For most companies, the real solution is distribution. We see new products every day with embedded processors connected to a network. For the past decade, cars have contained dozens of computers, each assigned to one function such as brakes, cruise control, entertainment systems, and even door locks. As the car manufacturers move toward fully autonomous vehicles, we are seeing all the computers within a car integrated into a single network, with additional computers added for new functions. Over the next ten years we will see the cars themselves integrated into a wider network that includes other cars, traffic signals, and monitors.

As you are looking at your future product plans, consider distribution both within your product and to the outside world as a way to expand capabilities and performance and attract new customers. Always remember that the Cloud is there to help.

The last word:

I was one of 23 thought leaders recently featured in Tenfold’s “23 Thought Leaders Answer: What’s Your #1 Tip for a Successful First Meeting with a Prospect?” You might want to check it out.

Comments solicited.

Keep your sense of humor.

Walt.

I feel a little lazy this week. We just got back from a very busy spring with two cruises: one from Vancouver around Hawaii and back to Vancouver on Holland America and the other from Amsterdam to Budapest on a Viking Longboat. I strongly recommend both cruises. Between the trips we attended a family wedding at the other end of the state.

But cyber attacks continue unabated. Some of the more recent “highlights”:

  • On top of the 191 million voter registration records stolen in December 2015, another 56 million records were captured and exposed, probably by a Christian right-wing organization. While a lot of information in your voter registration file is public, it does include name, address, birth date, and party affiliation. Organizations can use that information to correlate other non-public information including voting history, religious affiliation, charity donations, work place, income level, political leaning, and some really strange information like whether you like auto racing.
  • State Farm had information on 77,000 customers stolen by a hack into DAC Group, a large advertising agency in the US and Canada. While it currently seems that no financial information was stolen, it is likely that these customers’ email addresses were. What is instructive, however, is that this information was stolen from a development server at DAC. Security on development systems is often not as comprehensive as on production systems, and one of the reasons to have a development system is to confirm that enhancements have not impacted data security before the software moves to the production environment. You should never use production data in a development environment. DAC should have known better.
  • A Japanese travel agency, JTB Corp, had personal information on almost 8 million people stolen. One of the JTB group companies experienced a targeted email attack, and an employee opened an attached file, which infected their server.
  • On the lighter side, the Cowboys Casino in Calgary, Canada, was attacked, and personal information on fewer than 2,000 customers and staff was stolen. Your parents told you not to gamble.

These are just a few of dozens of attacks in June 2016. If you are not having trouble sleeping, check out Norse’s real-time threat intelligence map, which shows in real time a small subset of network attacks based on their service and port. It does not include email or other application-level or OS-level attacks.

The last word:

For those of you in the United States, enjoy the Fourth of July and think about the freedoms we have here.

A number of people we met on the European cruise were from the UK, and this cruise was just before the BREXIT election. Most of them were concerned that the UK might vote to leave. From my perspective, it is past time for the UK to leave the EU. The EU bureaucrats control far too much of what each individual country and company must do, down to specifying the size and shape of wine bottles. These bureaucrats all seem to be socialists. As a result, the growth of the European economy is in last place compared to Africa, Asia, North and South America. However, the European economy is growing faster than the economy of Antarctica.

In 1992, “everyone” predicted dire consequences for the UK economy when it refused to abandon the Pound and move to the Euro. In 1990, the UK had entered the European Exchange Rate Mechanism, a prerequisite for adopting the Euro. The UK spent over £6 billion trying to keep its currency within the narrow limits prescribed by the EU, but, led by Prime Minister Tony Blair and his successor Gordon Brown, it finally ruled out conversion to the Euro in 2007. One of the best moves in recent UK history.

Before the BREXIT vote, the UK was the fifth largest economy in the world. Do you really think a European company will cease to trade with a UK company because they are no longer in the EU?

Comments solicited.

Keep your sense of humor.

Walt.