Last time I wrote about how the complexity of the presidential voting process in the US is an important defense against cyber-terrorism, and specifically the risk of a foreign power impacting or invalidating such an election. While security by obscurity is not usually a best practice, it has been successfully used in the past. If you make something complex enough, it becomes very difficult to break.

With each state, or sometimes each county, determining its own voting process using multiple vendors' products, and with almost none of it connected to the Internet, it would be very difficult to mount a coordinated attack against an American presidential election. But those more than fifty separate results across the country are not the final result.

The Electoral College provides another level of defense. While the ballot may show a specific candidate's name, in a presidential election you are voting not for a candidate but for an elector who has promised to vote for that candidate when the electors "meet" in mid-December. (They do not all gather in one place; the electors vote in their respective states on the first Monday after the second Wednesday in December.) Maine and Nebraska apportion their electors based on the popular vote in the state; the other states are "winner-take-all."

To win, a candidate must get a majority of the Electoral College votes cast, not the largest number of votes cast. Currently, that means that a candidate must have 270 votes to win. The president and vice president are voted on separately in the Electoral College. In case there is no candidate with a majority, the House of Representatives selects the president and the Senate selects the vice president.

The intent of the Electoral College was that the electors would discuss the various candidates and decide on a candidate, hopefully representing the views of the people who voted for the electors. Today, of course, the electors are expected to vote for the candidate they represented on the ballot. Twenty-four states have laws to punish an elector who does not vote for the candidate they represent, but there are no federal laws covering that situation. In 1952, the Supreme Court ruled that such state laws were constitutional and that each elector is a functionary of the state, not the federal government. In other words, Congress may not pass a law restricting what an elector can do.

In case of the death, serious illness, or withdrawal of a candidate who had a majority of the electors before the Electoral College meets, the electors could choose another candidate, probably of the same party.

If no candidate emerges from the Electoral College meeting with a majority, the House of Representatives goes into an immediate session. For this "election," each state delegation has one vote, and a candidate must receive 26 of the state votes. A minimum of 34 states must be represented in this vote, and only the top three candidates can be considered. The session continues until the House elects a president. The House has chosen the president twice: in 1801 (Thomas Jefferson) and in 1825 (John Quincy Adams).

Similarly, the Senate goes into session and chooses between the top two vote getters for vice president. Each Senator gets one vote, and at least 67 Senators must be present. A candidate must get at least 51 votes to win, and the sitting Vice President does not get a vote. The Senate has chosen the vice president once: in 1837 (Richard Johnson, vice president under Martin Van Buren).

It is therefore possible to end up with a president from one party and the vice president from another party, especially if different parties control the House and Senate.

The last word:

The constitutional process for the election means that no third party candidate is likely to become president. If the third party candidate does not get a majority of the Electoral College votes, but gets enough to prevent any other candidate from getting a majority, the election goes to the House. The House members are not likely to choose someone who isn't a member of one of the two major parties.

However, this year there is one realistically possible, although unlikely, scenario in which a third party candidate wins. And it is not Gary Johnson; Johnson is unlikely to get any electoral votes even if he gets more than 10% of the popular vote. But Evan McMullin could. McMullin is a 40-year-old ex-CIA overseas operator with Middle East experience; he has also been an advisor to the House Committee on Foreign Affairs, served as chief policy director of the House Republican Conference, and holds standard Republican Party views on most issues. He is a Mormon, is running for president as an independent in Utah, and is polling just 4 percentage points behind Trump in this historically solid Republican state. If Mitt Romney, another Mormon, endorses McMullin, it could push him over the top. If McMullin wins Utah, he gets six Electoral College votes, possibly enough to prevent Hillary Clinton from reaching 270 Electoral College votes. He is also on the ballot in ten other states, but is unlikely to win any of them. If no one reaches 270, the election goes to the House. The Republican Party controls 33 of the 50 state delegations, so Clinton will not win. But Trump has burned enough bridges that he will likely fall short of the 26 required state votes. The House keeps voting, and must pick from the top three Electoral College vote getters: Clinton, Trump, or McMullin. At some point, the Republican leadership will realize that having someone with Republican views as president is better than having Trump as president.

The Senate gets to choose from the top two vice president candidates, Pence and Kaine. With 54 Republican Senators, Pence will most likely become the Vice President.

Comments solicited.

Keep your sense of humor.


With the news of targeted attacks against election systems, should the American voter be concerned that the upcoming presidential election could be manipulated or invalidated by a foreign government or cyber-terrorists?

In my view, the short answer is "no." The reason is that the US election system is so complex and distributed that there is no single attack point.

Our founding fathers deliberately set up this complex system because of the reality of the late eighteenth century. At that point, the newly born United States with its thirteen states was larger than any country in Europe, spanning over 1,000 miles as the crow flies. Messages and people could travel only at the speed of a walking horse or a sailboat; just getting from New York City to Philadelphia would usually take at least three days. A voter in Boston would know very little about a candidate from Virginia.

With slow communications in mind, the Constitutional Convention made a series of compromises in the summer of 1787 to balance the rights of the individual states against the power the national government needed to make a strong country. One of those compromises gave us our House of Representatives, representing the people, and the Senate, representing the states. For the current topic, the two important compromises were the creation of the Electoral College and giving each individual state control over the election.

The result is that each state is responsible for the number of precincts and polling places in that state, and for the manner in which votes are cast and collected. While various voting rights acts have affected the way precincts and districts are defined, the states still retain control over the voting process. In many states, this responsibility is passed down to the individual counties, so voters may be using multiple voting mechanisms within the same state. A few states, like Oregon, have switched or are switching to mail-only voting.

In the 2004 election, according to Election Data Services, there were about 186,000 precincts. Each precinct represented between 436 and 2,703 registered voters, with an average of around 1,100 registered voters per precinct.

ABC News reported that Russian hackers have targeted more than twenty state voter registration systems and have been successful in hacking four (Illinois, Arizona, Florida, plus another that I have not been able to identify). Of course, these are just the states that have actually made the effort to determine whether they had been hacked. How many others have been attacked?

The US Department of Homeland Security (DHS) has offered to help state election boards stay secure, but as of this posting only eighteen states have expressed any interest in that help. DHS has also offered a more comprehensive on-site risk assessment. While four states have expressed interest, DHS is offering this service so late that it will likely be able to provide it to only one state before Election Day. This is yet another example of how the US government is late and slow to respond to cyber security threats.

These attacks may be more about stealing personal information for future identity theft, but if they really are coming from Russia, it is difficult to determine their true purpose.

The good news is that these voter registration systems are not integrated into the actual voting systems. Even if a registration system is damaged, each state has procedures for a “provisional ballot.” You submit a ballot on Election Day, usually on paper in a sealed envelope, and election officials have time to research and confirm or deny the ballot after Election Day but before the official results announcement. Provisional and absentee ballots are generally only counted if they could possibly make a difference for any ballot position or question. The insertion of the Electoral College process provides a significant time window to deal with absentee and provisional ballots.

We have more than fifty different voting systems from multiple vendors distributed across all fifty states, plus precincts in the District of Columbia, territories like Puerto Rico, and foreign locations including some embassies and military bases. Since almost none of these voting systems are connected to the Internet, it will be very difficult for hackers to mount a successful attack that can impact an election.

The last word:

This does not mean that we will have a fraud-free election, but it means that we need to continue to be vigilant for the relatively few cases of voter fraud, voter intimidation by groups or individuals, or “lost” ballot boxes. If you are sleeping too much, search for “lost ballot box” on Google.

Comments solicited.

Keep your sense of humor.


Autonomous Vehicle Update

Tesla is having an interesting year with its Autopilot capabilities. Its software, supported by cameras, radar, and ultrasonic sensors, can automatically drive on a highway, including changing lanes and adjusting speed depending on surrounding traffic. It can find a parking space and parallel park itself. It even comes when called: the Summon feature allows you to call your car from your phone and have it meet you at your front door. How many times have you wished for that feature when you can't quite remember exactly where you parked in that huge shopping center lot on a rainy, cold day?

Joshua Neally is quite happy with his Model X. On his way home in Springfield, Missouri, he suddenly felt something like "a steel pole through my chest." He let his Tesla autonomously take him more than 20 miles to the off-ramp near the closest hospital. Neally had to drive himself the final stretch, but he survived a pulmonary embolism, a condition that kills 50,000 people a year, 70% of them within the first hour.

On the other hand, Joshua Brown had a different experience with his Tesla Model S in Williston, Florida. His car's sensor system failed to detect a large white 18-wheeler crossing the highway on a bright, sunny day. Brown was killed when his car went under the trailer, which tore off the top of the car.

Tesla cars are not fully autonomous: they require the driver to remain alert and keep their hands on the steering wheel. Neally’s car got him safely 20 miles and off the freeway, but he had to manually drive to the emergency entrance. Brown was apparently watching a Harry Potter movie at the time of the crash; at least the movie was still playing when the car finally stopped after snapping a telephone pole a quarter mile from the accident.

Uber in Pittsburgh is offering rides in self-driving Ford Fusion cars. There is still a driver who has to be ready to take over at any time. The current software will not automatically change lanes, like when a delivery van is double-parked on a city street. The driver needs to take control to safely go around the obstacle.

Like today’s cruise control, you must engage the self-driving features, and manually braking or accelerating disables the self-driving features.

Surprisingly, bridges can be a problem for autonomous vehicles. One might think that nothing could be simpler than a bridge: a straight set of well-marked lanes with a definite right side and minimal distractions like pedestrians and cross streets. But that simplicity is a big part of the problem. Without environmental cues like buildings, it is harder for the car to figure out exactly where it is. This is one reason Pittsburgh was chosen for this first Uber self-driving car rollout. Pittsburgh also has four seasons, an irregular grid of roads, lots of potholes, and Carnegie Mellon University's robotics center, which provides the self-driving hardware and software.

We are in the awkward learning phase with autonomous vehicles. Having someone behind the wheel to take over is problematic. It is hard enough to stay focused on driving when you are actually driving, let alone when you have very little to do.

The last word:

If you have one of the current crop of semi-autonomous vehicles, you must pay attention at all times. A Tesla emits an audible chime when it detects that the driver does not have his hands on the wheel. Uber currently has an engineer behind the wheel, and another ride-along engineer in the passenger seat monitoring the car and taking notes.

If you are riding in a semi-autonomous vehicle, like the Uber cars in Pittsburgh, do not distract the driver. Treat the driver like you would in a normal vehicle.

On the other hand, Teslas had been driven more than 130 million miles in Autopilot mode prior to the first fatality, compared to one fatality every 94 million miles driven in the US and every 60 million miles worldwide.

Always remember, these software systems are still in beta. You won’t trust your business to beta software; don’t trust your life to beta software!

Comments solicited.

Keep your sense of humor.


This is the last of a series of four blogs about quantum computing. The first was a quick view into the weird world of quantum physics, followed by a look at what capabilities a quantum computer would have. Last time we looked at the significant implications a quantum computer will have for data security.

Here are some examples of where we are today:

  • MIT has created a five-qubit quantum computer (Science, March 2016).
  • The Canadian company D-Wave Systems shipped its first quantum computer in 2010 with 128 qubits, and has announced the availability of the D-Wave 2X system with more than 1,000 qubits. On the other hand, there are lots of skeptics about whether what D-Wave is creating is really a quantum computer. It clearly uses some quantum capabilities, but if I understand it correctly (a big "if"), it deliberately avoids using superposition and quantum entanglement. If so, that limits its capabilities as a quantum computer. However, D-Wave is way ahead of anybody else in actually building a computer based on quantum concepts.
  • The Australian company Shoal created a quantum computer for the Australian Department of Defense in 2014, and then spun off QxBranch as a quantum computing software company working closely with D-Wave.
  • California-based Rigetti Computing is developing fault-tolerant, gate-based, solid-state quantum processors that it claims are highly scalable and low cost.

The largest number successfully factored by a quantum computer is 56,153 (233 × 241). At this point, the time to factor that 16-bit number with a quantum computer is longer than the time to factor it on a modern classical computer. For comparison, today's encryption keys have hundreds or even thousands of bits.
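To see why that 16-bit record is unimpressive by classical standards, here is a minimal trial-division sketch of my own (plain Python, no quantum hardware involved) that factors 56,153 essentially instantly:

```python
# Classical trial division: finds the factors of small numbers like
# 56,153 in microseconds. The exponential pain only begins when the
# number has hundreds of digits.

def factor(n):
    """Return a factor pair (p, q) with p the smallest prime factor of n."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return n, 1  # n itself is prime

print(factor(56153))  # (233, 241)
```

The same loop applied to a 500-digit product would have to count up to roughly 10^250 candidates, which is the gap quantum factoring is expected to close.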

How long will it take to build a quantum computer large enough to threaten today's network security practices? It took 25 years from the first digital computer (ENIAC, 1946) until computers were powerful enough and ubiquitous enough to create the first primitive networks (ARPANET, 1971). It took another 19 years until Tim Berners-Lee created the first web browser in 1990, giving rise to the World Wide Web. It won't take that long to get real quantum computers: maybe twenty years, but more likely closer to ten.

The last word:

You don’t have to worry about a quantum computer cracking your network security and exposing all of your secrets. Yet. You do need to remain vigilant because sometime there will be such a quantum computer. You can bet the first such computers will be deep inside organizations like the US National Security Agency (NSA) and similar organizations in other countries.

For those of us who lived through or even participated in the space race, one of the significant differences between the US and the USSR was openness: the US did everything in public, the USSR did everything in secret and only revealed their successes after the fact. These days, the NSA acts much more like the Soviet model, keeping a tight rein on security products, and with the ability and inclination to prevent technologies from entering the marketplace until the NSA is ready.

Our first indication of the existence of a powerful quantum computer may be the successful attack on a nation’s political, military, financial or physical infrastructure.

Comments solicited.

Keep your sense of humor.


This is the third in my Quantum Computing series. Last time I indicated that the two main areas in which quantum computers will be very much faster than digital computers are searching and factoring. The average individual and almost every company will rarely need the incredible searching capabilities of a large quantum computer, and I suspect that specialized companies will be created in the next twenty years or so to handle the special cases that do come up.

But everyone should be concerned about a quantum computer’s capability to almost instantaneously factor large numbers. To understand why, we have to understand how encryption is actually done in our digital world. There are two main types of encryption: symmetric-key encryption and public-key encryption.

Symmetric-key encryption uses the same key for both encryption and decryption. Both parties must have the same key in order to communicate securely. We use symmetric-key encryption every day: whenever you see https:// (instead of just http://) in an Internet URL, you are using symmetric-key encryption. Symmetric-key encryption algorithms are subject to various attacks based on the process that generates the symmetric key, but the biggest issue is how to securely transmit the key between the two parties. That key sharing usually involves some form of public-key encryption.

Public-key encryption has two keys: a published key that anyone can use to encrypt messages and a private decryption key that only the receiver has. While the process to generate the pair of keys is mathematically amusing, the key component of the process is to multiply two very large prime numbers together. The public key is that product plus another calculated value based on the two primes that form the product. The security of public-key encryption is based on the time it takes with current digital technology to determine the two prime factors that are used to compute the public and private keys. This factoring time goes up exponentially as the key gets larger, so that today by the time some organization could break a code, the data would be of historical interest only.
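As an illustration of how the key pair depends on two secret primes, here is a toy RSA-style sketch using deliberately tiny textbook primes; real keys use primes hundreds of digits long, and real implementations add padding and other safeguards omitted here:

```python
# Toy RSA key generation and round trip. The primes are absurdly small;
# anyone can factor n = 3233 by hand, which is exactly the point: the
# scheme's security is only as good as the difficulty of factoring n.

p, q = 61, 53                # the two secret primes
n = p * q                    # 3233: published as part of the public key
phi = (p - 1) * (q - 1)      # 3120: computable only if you know p and q
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(recovered)  # 65
```

Anyone who can recover p and q from n can compute phi and then the private exponent d, which is why fast factoring breaks the whole scheme.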

However, Peter Shor at MIT has shown that a quantum computer could factor large numbers easily, meaning very quickly. Oops.

Quantum computers could end the predominance of public-key encryption algorithms, which would also seriously impact symmetric-key encryption.

The ideal cryptographic protocol is the "one-time pad," first described in 1882. A one-time pad is a random secret key that is used only once. It was originally an actual pad of paper containing the key, or more likely a set of keys. The pads were then physically carried from one party to the other, often using clandestine methods; the KGB created one-time pads that could fit inside a walnut shell. Today, most symmetric-key algorithms create a one-time-use key in real time for short-term use. For example, https security creates a new key for each communication session. If you are communicating over https with multiple sites at the same time from the same browser, each of those communications has a different symmetric key.
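A one-time pad is simple enough to sketch in a few lines; this toy version (my own illustration, not a production cipher) XORs the message with a random key of equal length. Its perfect secrecy holds only if the key is truly random, kept secret, and never reused:

```python
import secrets

# One-time pad: XOR with a same-length random key. XORing the
# ciphertext with the same key a second time recovers the message.

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))                    # the one-time key
ciphertext = bytes(m ^ k for m, k in zip(message, pad))    # encrypt
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))  # decrypt
print(recovered)  # b'ATTACK AT DAWN'
```

The hard part, as the post notes, has never been the cipher itself; it is getting the pad to the other party without interception.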

Quantum computing to the rescue: Quantum Key Distribution (QKD) allows the distribution of completely random keys at a distance, solving the biggest security problem with symmetric-key encryption. A key generator creates two entangled qubits (perhaps a pair of photons) and sends one to each party. Each party looks at one attribute of its qubit (say, polarity) and assigns a bit (0 or 1) based on the attribute value. Due to entanglement, both parties will get the same answer. Repeating this process can generate a symmetric key of any appropriate length, normally no larger than 256 bits.

More importantly, the parties can tell if anyone intercepted their qubit. If someone does intercept the qubit distribution, that interception will disturb the entanglement and the keys will no longer match. Problem solved.
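The detection idea can be caricatured in a few lines of ordinary Python. This is only a toy model of the statistics, not of the physics: each entangled pair is modeled as one shared random bit, and an eavesdropper's measurement is modeled as re-randomizing the receiver's copy, so comparing the two keys exposes the intrusion.

```python
import secrets

def distribute_key(length, eavesdropper=False):
    """Toy QKD: each 'entangled pair' yields the same random bit at both ends."""
    alice, bob = [], []
    for _ in range(length):
        bit = secrets.randbelow(2)        # shared outcome of one entangled pair
        alice.append(bit)
        if eavesdropper:
            bit = secrets.randbelow(2)    # interception disturbs Bob's copy
        bob.append(bit)
    return alice, bob

# Undisturbed pairs always agree, so the two keys match.
alice, bob = distribute_key(256)
print(alice == bob)  # True

# With an eavesdropper, a 256-bit key will almost surely show mismatches.
alice2, bob2 = distribute_key(256, eavesdropper=True)
print(alice2 == bob2)  # almost certainly False
```

In the disturbed case each of Bob's bits matches Alice's only half the time, so the chance of all 256 bits agreeing is about 2^-256, which is why even a modest sample comparison reveals the interception.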

The last word:

Perhaps one of the strangest potential uses of a quantum computer is to simulate quantum systems. This will allow scientists to understand what is really happening at the quantum level, and could perhaps lead to amazing new products in a variety of areas.

We have no idea what the quantum computer will eventually do. Howard Aiken was a pioneer computer engineer and the original conceptual designer behind the IBM Harvard Mark I computer in 1944. In 1952, he said, “Originally one thought that if there were a half dozen large computers in this country, hidden away in research laboratories, this would take care of all requirements we had throughout the country.”

Comments solicited.

Keep your sense of humor.


Last time we talked about the weird quantum universe, where particles can be in more than one state at the same time and can be entangled with another particle at great distances. What does this have to do with computers?

Since their beginnings in the late 1930s, all digital computers have been based on binary digits (bits), each of which can have a value of zero or one. Early computers might have a few thousand bits in a box the size of a refrigerator. Your smart phone probably has more than 100 billion bits in a box that fits in your pocket. Digital computers work because the computer can tell a particular bit to be a zero or a one, and it will stay in that state until it is explicitly told to change. This is a good thing and a bad thing. It is good because we can rely on the state of that bit. It is bad because it takes time to set the value of a bit, and later to look at it to see what that state is. Because of that, there are problems that would take digital computers millions of years to solve.

The quantum world is a little different. Niels Bohr, who won the 1922 Nobel Prize in Physics for his work in quantum theory, famously said, "If quantum mechanics hasn't profoundly shocked you, you haven't understood it yet."

Quantum computers use quantum bits, called qubits. Qubits are very small; they must be to have quantum effects. They can be individual atoms, photons, or electrons. Unlike a digital computer, each qubit can be in multiple states simultaneously. The effect of that is that a quantum computer with a single qubit could solve two simple problems at the same time, one with two qubits could solve four simultaneously, and one with ten qubits could solve over a thousand simple problems simultaneously. This is called parallelism. A 30-qubit quantum computer would be at least a thousand times faster than today’s conventional desktop computer.
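The growth behind those numbers is just powers of two; a few lines make the scale concrete (the speed comparisons above are rough, but the state count is exact):

```python
# n qubits span 2**n basis states simultaneously, while n classical
# bits hold exactly one of those 2**n values at a time.

for n in (1, 2, 10, 30):
    print(f"{n:>2} qubits -> {2**n:,} simultaneous states")
```

Thirty qubits already correspond to over a billion simultaneous states, which is where the comparison to a conventional desktop comes from.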

To solve very complex problems, the digital world has created computers that take advantage of parallelism. Computer companies have built machines with dozens to thousands of separate processors, each performing the same function across a large array of data. IBM's Blue Gene is one example of a massively parallel supercomputer. SETI@home (Search for Extraterrestrial Intelligence) is an example of a distributed parallelism approach, often called grid computing, in which 290,000 computers around the world work on the same problem across the Internet.

Blue Gene cost $100 million to build. SETI@home takes advantage of idle time on lots of computers, so it is hard to put a dollar cost on it. In ten years, it managed to cover only about 20% of the celestial sphere.

One problem with a qubit is that if you look at it, it assumes a specific state and stays there; the superposition disappears. This is where entanglement comes in: you can indirectly look at the value of a qubit's attribute.

Armed with a quantum computer, what could you do? In general, not what you do today on your digital mainframe, desktop, or phone. A quantum computer would be lousy at balancing a checkbook, creating a spreadsheet, or writing a book. (Unless you were trying to prove the infinite monkey theorem: a monkey sitting at a typewriter hitting the keys at random would eventually write the complete works of William Shakespeare.)

Quantum computers will be very good in two areas: search and factoring.

  • If you have a lot of data to search (like the SETI problem), a quantum computer could look at all of the data at once.
  • While it is easy for a digital computer to multiply two large numbers, even ones with hundreds of digits each, determining the factors of a very large number, like one with 500 digits, is very difficult and can take thousands of years with today's fastest computers. A quantum computer could factor a 500-digit number in seconds.
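The asymmetry in that second bullet is easy to demonstrate from the cheap side: multiplying two random ~500-bit numbers (about 150 decimal digits each) is effectively instantaneous on any laptop, while undoing that multiplication by brute force is hopeless. Only the easy half is shown in this sketch:

```python
import secrets
import time

# Multiplying two large odd numbers takes microseconds; recovering them
# from the product is the hard direction that keeps encryption safe.

a = secrets.randbits(500) | 1
b = secrets.randbits(500) | 1

start = time.perf_counter()
product = a * b
elapsed = time.perf_counter() - start

print(f"multiplied two ~500-bit numbers in {elapsed:.6f} seconds")
print(product % a == 0 and product % b == 0)  # True
```

That one-way gap, fast to multiply and slow to factor, is exactly what the next post's discussion of encryption builds on.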

OK, a very quick search would be neat but Google is fast enough for me. And I never even want to think about factoring a 500-digit number, so I don’t care about that.

Next time we will explore why you should care.

The last word:

As you probably know, the US is far behind most of the rest of the world in cell phone and credit card technology. Almost every point of sale in Europe takes EMV cards, smart cards that store the card data in integrated circuits embedded in the card or in an RFID chip. If you are wondering what the acronym "EMV" stands for, it is "Europay, Mastercard and VISA," the three companies that created the standard.

There are two security concerns you should be aware of:

  1. The RFID chips in your smart credit card can be read over a distance of up to three feet. Anyone close to you for a second could be a thief stealing your card. Always carry it in an RFID shielded envelope, wallet, or purse.
  2. Because non-smart cards are a pain for the retailers in Europe, they don’t like to take them and when they do they are not very careful. They won’t bother checking the signature on the card, even if you have signed it with “CHECK ID.” Anyone who steals your physical card will have no problem using it. Your credit card company should cover your costs, but it will be a huge pain. Never carry debit cards – you have very little protection against someone emptying your bank account, and they can do it in a matter of minutes.

Comments solicited.

Keep your sense of humor.


In my last post I indicated that the fifty-year trend of doubling computer processing power every two years was coming to an end, and growth with the current integrated circuit technologies is expected to become almost flat by 2018. One of the possible ways around this limit is quantum computing. Quantum computers take advantage of the strange world of quantum mechanics.

This is the first in a series of planned posts to give a brief overview of quantum physics (with no math), a discussion of quantum computers, the potential impact of them especially as it pertains to data security, and the current state of development of quantum computers.

Most of us spend nearly all of our time in a Newtonian physical environment. If we drop something, it falls, and we can predict how long it will take to fall. We can throw a baseball with, depending on our skill, a reasonable expectation that we know where it will go. We know how long it will take for something to travel between two points, subject to understandable issues like traffic jams, construction, or weather. This works whether we are trying to drive to the grocery store or land a spacecraft on Mars.

But outside of this environment, things may work differently. At really high speeds, relativity has an effect. NASA scientists were able to measure this admittedly very small effect on the Apollo missions to the moon: the astronauts aged a tiny fraction of a second less than the rest of us stuck on earth due to the speed of their travel to and from the moon. At very small sizes, quantum effects take over, and some of these effects may seem to be just weird.

One of these weird effects is the uncertainty principle: the more precisely you measure one aspect of a quantum particle, the less precisely you can know another. At the quantum level, a policeman could determine exactly how fast you were driving, but could not then determine where you were.

Superposition is the principle that a quantum object is actually in all possible states simultaneously, all the time, until something checks it. You have probably heard of the "Schrödinger's cat" thought experiment: place a live cat in a steel box along with a sealed vial of a highly poisonous gas, a hammer, a very small amount of radioactive material, and a very sensitive radioactive decay sensor (e.g., a Geiger counter). If the sensor detects the decay of a single radioactive atom during the test period, a relay causes the hammer to fall on the vial of gas and the cat dies. If not, the cat lives. We cannot know the state of the cat until we actually break into the box and look, and in the quantum world the cat is both dead and alive until we observe it. If you are a cat lover, substitute "evil squirrel" for "cat" in this paragraph. Like most thought experiments, this one is technically flawed; a cat is not a quantum entity, and "looking" at one cannot change its living state.

Quantum entanglement may be one of the strangest concepts in the weird quantum world. If two particles are entangled and you measure one property of one particle, the same property of the other particle will be identical. Measuring the property of one particle fixes the property of the other, so if it is also measured it will always match the first. Somehow, the second particle learns the result from the first particle, and this happens instantaneously over any distance. When the particles are far enough apart, this "learning" travels faster than the speed of light. However, you do not learn anything about any other property of the entangled pair. Any other property still has all of its possible values on each particle, and there is no relation between the particles for that value. For example, assume these entangled quantum particles have two properties: color (red or green) and shape (cube or sphere). If you measured the shape of one particle and found it to be a cube, then the other particle would also be a cube. However, the color of each particle could be red or green independently.

As current computer chip technology gets faster, the individual components on the chip of necessity get smaller and closer together. At the speeds our current chips operate, even the speed of light is too slow: components must be as close to each other as possible to minimize the time it takes a signal to travel between them. Today, individual components may approach the size of a molecule. Chips with this level of component density face three main challenges: high defect rates, process variation, and quantum effects. The first two are simply the result of the tighter tolerances in the actual manufacturing phase. The industry has been pushing these limits for the past couple of decades, resulting in the continuous development of more exacting, and more expensive, manufacturing methods as well as more stringent testing processes.
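A back-of-the-envelope calculation shows why distance matters at modern clock speeds; the 3 GHz clock is an assumed, typical figure.

```python
# How far can a signal travel in one clock cycle?
c = 299_792_458           # speed of light in a vacuum, in m/s
clock_hz = 3e9            # an assumed, typical ~3 GHz processor clock

per_cycle_m = c / clock_hz
print(f"light travels about {per_cycle_m * 100:.1f} cm per cycle")

# Electrical signals on a chip propagate at only a fraction of c, so two
# components even a few centimeters apart cannot exchange a signal
# within a single clock cycle.
```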

Quantum effects are not as easily overcome. They can cause high leakage currents and large variability in the device’s characteristics. As transistors get smaller, quantum superposition will make it impossible to distinguish between the two states of a transistor. The real barrier to unlimited performance increases in computers using today’s technology is the reality of the quantum universe.

The last word:

The two main national political conventions / carnivals are over. The Republican convention in Cleveland had no fence around the convention center and fewer than 25 arrests over the entire four days. The Democratic convention in Philadelphia had an eight-foot-tall fence around the convention arena, raised in part to twelve feet after the first night. More than 50 protesters were cited and removed by police during the first day alone, and those protests were literally drowned out by several severe thunderstorms with gale-force winds.

Please do not jump to any conclusions from these data points. As I have said before, apparent correlation does not imply causation. The different results may be due more to the differences between Cleveland and Philadelphia, or the weather, than to the political parties.

One clear distinction that is caused by the parties is the presence, or absence, of the US flag within the convention. For the first two days of the Democratic convention, there were no US flags within the convention center, a sports arena. The Democratic committee had all of the US flags removed, including the huge one that has always hung above the center in the arena. Apparently, it interfered with the balloons.

Alas, the American voter is left with the sad choice between a clown and a criminal.

Comments solicited.

Keep your sense of humor.


In 1965, Gordon Moore predicted that computer chips would double in performance every two years at low cost, a prediction now known as Moore’s law. He also predicted that chips would eventually be so small and inexpensive that they could be embedded in homes, cars, and what he called “personal portable communications equipment.” In 1968, he and Robert Noyce founded NM Electronics, soon renamed Integrated Electronics and then shortened to Intel. Integrated circuits were just in their infancy at that point, with companies struggling with the technical issues of putting even eight transistors on a single chip.

Of course, Moore’s law is not a physical law, but, thanks in significant part to the work of Intel, it has held remarkably true for 50 years. Earlier this year, however, Intel announced that it will no longer keep up with Moore’s law.

The issue is part science and part finance. Shrinking the size of the microscopic transistors in a modern integrated circuit puts those transistors closer together, which causes two problems: heat and quantum effects. To achieve the desired high performance, these densely packed transistors generate a lot of heat, and too much heat can literally fry a chip, making it useless. Quantum effects cause transistor behavior to become unpredictable, not a desired trait in the way we use computers today.

The financial problem is the cost of producing these new integrated circuits. Today, each machine that “stamps” out chips costs about US$50M. Each future “generation” of chips will increase design and production costs by up to 50%, meaning a new chip factory may cost US$10B to build.
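To see how quickly that compounds, here is a small sketch using the figures above and assuming the full 50% increase applies at every generation (an illustration, not a forecast):

```python
# Compounding the per-generation cost growth described above.
cost = 50e6  # roughly the cost of one chip-stamping machine today, in USD
for generation in range(1, 6):
    cost *= 1.5
    print(f"generation +{generation}: ~${cost / 1e6:,.0f}M per machine")
```

After only five such generations the per-machine cost is more than seven times today’s figure, which is why whole factories reach the billions.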

What does this mean to you and your company? Probably not much in the short term. In fact, if you are lagging a little in the technology you use in your products, this may give you a chance to catch up. Not surprisingly, chip manufacturers are working on several alternative approaches to continue to drive growth in semiconductors:

  • Carry on along the current path. The real obstacle is simply money. For those cases where you actually need maximum performance from a single small package, you will likely be able to get it. You may not like the cost, as production costs will be large and demand smaller.
  • Adopt new technologies, including spintronics, carbon nanotubes, and quantum computing. Intel plans to move away from silicon-based transistors over the next 4-5 years.

For most companies, the real solution is distribution. We see new products every day with embedded processors connected to a network. For the past decade, cars have contained dozens of computers, each assigned to one function such as brakes, cruise control, the entertainment system, or even the door locks. As car manufacturers move toward fully autonomous vehicles, we are seeing all of the computers within a car integrated into a single network, with additional computers added for new functions. Over the next ten years we will see the cars themselves integrated into a wider network that includes other cars, traffic signals, and traffic monitors.

As you are looking at your future product plans, consider distribution both within your product and to the outside world as a way to expand capabilities and performance and attract new customers. Always remember that the Cloud is there to help.

The last word:

I was one of 23 thought leaders recently featured in Tenfold’s “23 Thought Leaders Answer: What’s Your #1 Tip for a Successful First Meeting with a Prospect?” You might want to check it out.

Comments solicited.

Keep your sense of humor.


I feel a little lazy this week. We just got back from a very busy spring with two cruises: one from Vancouver around Hawaii and back to Vancouver on Holland America and the other from Amsterdam to Budapest on a Viking Longboat. I strongly recommend both cruises. Between the trips we attended a family wedding at the other end of the state.

But cyber attacks continue unabated. Some of the more recent “highlights”:

  • On top of the 191 million voter registration records stolen in December 2015, another 56 million records were captured and exposed, probably by a Christian right-wing organization. While a lot of the information in your voter registration file is public, it does include your name, address, birth date, and party affiliation. Organizations can use that information to correlate other non-public information, including voting history, religious affiliation, charity donations, workplace, income level, political leaning, and some really strange information like whether you like auto racing.
  • State Farm had information on 77,000 customers stolen through a hack into DAC Group, a large advertising agency in the US and Canada. While it currently seems that no financial information was stolen, these customers likely had their email addresses taken. What is instructive, however, is that this information was stolen from a development server at DAC. Security on development systems is often not as comprehensive as on production systems, yet one of the reasons to have a development system is to confirm that enhancements have not impacted data security before the software moves to the production environment. You should never use production data in a development environment. DAC should have known better.
  • A Japanese travel agency, JTB Corp, had personal information on almost 8 million people stolen. One of the JTB group companies experienced a targeted email attack; an employee opened an attached file, which infected their server.
  • On the lighter side, the Cowboys Casino in Calgary, Canada, was attacked, and personal information on fewer than 2,000 customers and staff was stolen. Your parents told you not to gamble.

These are just a few of the dozens of attacks in June 2016. If you are not having trouble sleeping, check out Norse’s real-time threat intelligence map, which shows a small subset of network attacks in real time, classified by service and port. It does not include email or other application-level or OS-level attacks.

The last word:

For those of you in the United States, enjoy the Fourth of July and think about the freedoms we have here.

A number of people we met on the European cruise were from the UK, and the cruise took place just before the BREXIT vote. Most of them were concerned that the UK might vote to leave. From my perspective, it is past time for the UK to leave the EU. The EU bureaucrats control far too much of what each individual country and company must do, down to specifying the size and shape of wine bottles. These bureaucrats all seem to be socialists. As a result, the growth of the European economy is in last place compared to Africa, Asia, and North and South America. However, the European economy is growing faster than the economy of Antarctica.

In 1992, “everyone” predicted dire consequences for the UK economy when it refused to abandon the pound and move to the Euro. In 1990, the UK had entered the European Exchange Rate Mechanism, a prerequisite for adopting the Euro. The UK spent over £6 billion trying to keep its currency within the narrow limits prescribed by the EU but, led by Prime Minister Tony Blair and his successor Gordon Brown, finally ruled out conversion to the Euro in 2007. One of the best moves in recent UK history.

Before the BREXIT vote, the UK was the fifth largest economy in the world. Do you really think a European company will cease to trade with a UK company because the UK is no longer in the EU?

Comments solicited.

Keep your sense of humor.


Compressed data and encrypted data look similar: both are strings of apparently random characters that bear no obvious relationship to the original data. But there are significant differences between the intent and the process of compression and encryption.

You compress data so it is smaller, reducing storage space or transmission times. But since you want to easily retrieve the original data, compression algorithms are standardized and well known. Consider a ZIP file: it can be expanded back into its original file(s) on almost any kind of computer system. In most cases, the receiving system needs no information beyond what is contained within the compressed file itself.

Compression algorithms work by finding strings of characters that are repeated within the data and replacing each occurrence with a much shorter string. If you had, for example, a long paper about George Washington, a simple compression algorithm might replace each occurrence of “George Washington” with “\gw\”, replacing 17 characters with just 4 each time. Compression algorithms can find lots of duplicated strings, like page headers and footers, as well as fragments of words or numbers.
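Python’s standard zlib module (the DEFLATE algorithm also used inside ZIP files) demonstrates how dramatically repeated strings compress; the sample text is hypothetical:

```python
import zlib

# A hypothetical document that repeats the same phrase many times.
text = b"George Washington was the first president of the United States. " * 200

compressed = zlib.compress(text)
print(len(text), "->", len(compressed))  # the repeats collapse dramatically

# And the process is fully reversible, as a compression must be:
assert zlib.decompress(compressed) == text
```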

You encrypt data so that only certain people can access it. To decrypt the data, the receiver needs to know a secret key. Depending on the type of encryption and the length of the key, a brute-force attack can take the fastest computers anywhere from seconds to millions of years. For any scheme more complicated than a simple character substitution (replace each “A” with “x”), the encryption process eliminates the duplicated strings: “George Washington” will most likely be encrypted into a different string at each occurrence.

Trying to compress encrypted data is therefore just a waste of time, and it can actually make the data bigger, since there is some overhead just to record the type of compression and the other parameters needed to decompress it.
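You can see this with a quick sketch. Well-encrypted output is statistically indistinguishable from random noise, so random bytes serve as a stand-in for ciphertext here (an assumption of the sketch, to avoid pulling in a crypto library):

```python
import os
import zlib

# Random bytes stand in for well-encrypted data.
ciphertext = os.urandom(10_000)

compressed = zlib.compress(ciphertext)
# With no repeated strings to find, the "compressed" copy comes out
# slightly LARGER: zlib still adds its own header and block bookkeeping.
print(len(ciphertext), "->", len(compressed))
```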

Some compression algorithms support some level of encryption. For example, when you create a ZIP file you can specify an encryption key. Many of these schemes are very weak and subject to easy attack, and you must still send the key to the receiver by some means. I watched a coworker email an encrypted ZIP file to a partner, then send a follow-up email with the password. If the receiver’s email was compromised, the cybercriminal received both the data and the key.

Both compression and encryption can take significant processing effort on each end. Usually it takes fewer resources to decompress data than to compress it. Since stored data need only be compressed once, when it is stored, but is often decompressed many times, this asymmetry is desirable.

Normally, encryption and decryption times are very close to each other on the same platform. Obviously, the actual times depend on the hardware characteristics of the platform.

You should always encrypt sensitive data, whether it is personal or financial data that is protected by regulations or laws, or proprietary information for a company or classified information for a country.

Whether you compress data is a simple business decision: do you save enough money or transmission time to justify the added cost of compressing and decompressing the data?

The last word:

If you need to both compress and encrypt data, compress first, then encrypt. That works, and you get the full benefit of the compression. However, the process introduces a vulnerability that makes the encryption easier to attack.

As mentioned earlier, each compression algorithm adds a header in front of the compressed data. That header defines the compression type and a set of parameters, and it has a fixed format. An attacker can determine the type of compression an organization uses or accepts simply by trying different compression schemes and seeing which ones are accepted. It then becomes far easier to attack the encryption, because the attacker knows exactly how the clear-text message starts.
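Python’s zlib illustrates the fixed header: two unrelated messages compressed with the same default settings begin with identical bytes, which is exactly the known-plaintext foothold described above if the compressed stream is encrypted afterwards.

```python
import zlib

# Two unrelated messages, compressed with the same default settings:
c1 = zlib.compress(b"attack at dawn")
c2 = zlib.compress(b"a completely different and much longer clear-text message")

print(c1[:2].hex(), c2[:2].hex())
# Both streams start with the same fixed zlib header bytes (0x78 0x9c
# for the default settings), so an attacker who encrypts nothing but
# compressed data always knows the first bytes of the plaintext.
```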

Comments solicited.

Keep your sense of humor.