The promise of quantum computing has been building year after year, yet practical reality still seems as far away as ever.
The good news is that there are finally end goals in sight, and the gap between promise and practical is narrowing. The bad news is that it is still a slow journey.
At SAS, the global analytics company that has helped pioneer modern data analysis over the past 50 years, the focus is on where quantum might eventually fit into real-world decision-making.
During the company’s SAS Innovate 2026 conference in Dallas last week, Bill Wisotsky, principal quantum systems architect at SAS, spoke to Gadget about the state of the technology and the hurdles still in the way. His work spans testing new systems, working with partners and universities, and helping customers explore early use cases.
(Tech warning: the interview includes deeply technical insights, so the following section will help guide the lay reader through the jargon.)
Making sense of quantum
A few basics help cut through the terminology used in the interview.
A qubit is the quantum version of a computer bit. A classical bit is either zero or one. A qubit can be both at once. This is known as superposition.
Qubits can also become entangled, meaning the state of one depends on another. This allows quantum systems to handle many possibilities at the same time. Albert Einstein called this “spooky action at a distance”, and even he was dubious about it.
The trade-off is instability. Errors are common, and current machines cannot correct them as they run. That is why quantum systems are described as not being fault tolerant.
To address this, researchers are working on logical qubits, which combine many physical qubits to reduce errors. This is one of the biggest technical hurdles.
Another difference lies in how results are produced. Classical systems are deterministic, meaning the same input gives the same output. Quantum systems are probabilistic, meaning the same calculation can produce different results.
That requires running computations multiple times and analysing the results.
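As a rough illustration only (a classical simulation in plain Python, not real quantum hardware or anything SAS uses), repeated measurement of a qubit in an equal superposition behaves like coin-flip sampling:

```python
import random

def measure_superposition(shots=1000, p_one=0.5, seed=42):
    """Classically simulate measuring a qubit many times.

    A qubit in an equal superposition collapses to 0 or 1 on each
    measurement, each with 50% probability (p_one is adjustable).
    """
    rng = random.Random(seed)
    counts = {0: 0, 1: 0}
    for _ in range(shots):
        counts[1 if rng.random() < p_one else 0] += 1
    return counts

counts = measure_superposition()
print(counts)  # roughly {0: 500, 1: 500}; varies run to run without a fixed seed
```

Analysing the distribution of those counts, rather than reading off a single answer, is what “running computations multiple times” means in practice.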
These differences shape the kinds of problems quantum is expected to tackle. One of the main areas is optimisation: finding the best outcome among many possibilities, whether that is a logistics route, a pricing model or a resource allocation problem. Another is simulation, especially in chemistry and materials science, where modelling molecules and their interactions quickly becomes too complex for classical systems.
Some of the approaches mentioned in the interview fall under quantum machine learning. This includes quantum neural networks, which attempt to apply pattern recognition techniques to quantum systems, and quantum reservoirs, where a quantum system transforms data into a form that can be analysed alongside classical methods. Both are still experimental, but they are seen as promising directions.
All of this depends on quantum algorithms: the instructions that make use of the hardware. Without them, even advanced machines have limited practical value.
Wisotsky also refers to an ansatz, which is a structured way of setting up a quantum algorithm: effectively a starting design for how a quantum system should process information.
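A toy sketch of that idea, using nothing beyond the Python standard library (the “circuit” here is a hypothetical one-parameter formula standing in for real quantum hardware, not any algorithm described in the interview): the ansatz fixes the structure, a classical loop tunes its parameter, and every evaluation relies on repeated probabilistic measurement.

```python
import math
import random

def ansatz_probability(theta):
    """Toy one-qubit ansatz: a rotation by angle theta gives
    probability sin^2(theta/2) of measuring 1."""
    return math.sin(theta / 2) ** 2

def estimate(theta, shots=2000, seed=0):
    """Estimate that probability the quantum way: by averaging
    many simulated measurements rather than computing it directly."""
    rng = random.Random(seed)
    p = ansatz_probability(theta)
    return sum(rng.random() < p for _ in range(shots)) / shots

# Classical outer loop: sweep the ansatz parameter to find the
# setting most likely to yield 1 (the true optimum is theta = pi).
grid = [i / 100 * math.pi for i in range(101)]
best_theta = max(grid, key=estimate)
```

This starting-design-plus-classical-tuning pattern is the shape of the variational approaches mentioned later in the interview.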
Interview with Bill Wisotsky, principal quantum systems architect at SAS
Arthur Goldstuck
Tell me more about what you yourself are doing in quantum?
Bill Wisotsky
I’m at SAS as the principal quantum systems architect. My job varies. I work with all of our quantum partners. I test new technologies. I see which ones we should not invest money in but should invest time in, to start learning them and seeing how we could work with them. I work with different companies. I work with our universities, and I work with all of our customers on proofs of concept. I work on quantum algorithm development as well.
AG
Looking at the current state of quantum hardware, what do you see as the next inflection point in quantum computing that will actually run in the system, as opposed to the theoretical discussions?
BW
I think the next big thing that we’re going to see in quantum computing is going to be fault tolerance. Right now, quantum computers are not fault tolerant. There’s really no error correction going on, so you have to do post-processing or error mitigation to deal with the errors that come out. Until fault tolerance is there, there are going to be limited use cases. And tied to that is the size of the quantum computers, because in order to do the fault tolerance, you need logical qubits. And logical qubits are made up of numbers of physical qubits, so they’re both connected.
AG
And when do you think that will happen? Is there a roadmap towards fault tolerance?
BW
Most quantum computer companies are saying 2032 to 2035.
AG
That’s also the timeframe they give for when quantum becomes practical.
BW
Because they need fault tolerance to make it useful in that way.
AG
There are a lot of architectural approaches (to achieving fault tolerance). Which one gives you the most confidence that they’ll actually solve this problem?
BW
I think there’s going to be multiple architectures. I don’t think there’s going to be one. You’ve got to remember that quantum is a moving target, it’s an evolving landscape. I think superconducting qubits are always going to be promising. I think that neutral atoms are always going to be promising. And trapped ions. I think that those three are really big ones.
There are some interesting technologies that are promising. They’re just a little bit, I would say, younger, maybe not as mature. Photonics is very promising, but they have much smaller quantum computers. What they call quantum dots, or silicon spin or electron spin qubits, where thousands of qubits could be manufactured using standard CMOS technologies, are promising, but they’re still in the beginning.
I don’t think there’s going to be one, because different modalities are better at solving certain problems than others.
AG
When you think about quantum machine learning, quantum neural networks, reservoirs, quantum LLMs, they have produced mixed results. Where have you seen genuine, actual traction? Or do you think the field is overselling itself?
BW
There are two sides to that coin. One is the hype. You hear quantum companies come out with these things, saying, ‘We solved a problem in two hours that would take hundreds of years’. It’s amazing that they really did solve those problems. But those problems are not necessarily applied problems. They’re very specific problems. That adds to the hype of quantum computers. And when I work with customers, they say, ‘Well, can you solve this problem?’ And no, not necessarily. It’s not the same thing.
There are certain areas of quantum that are very promising. Optimisation is one of them. We’ve seen some really good results in optimisation. Machine learning is very promising. As the quantum computers become more powerful, we’re going to see more and more benefit to machine learning. Quantum neural networks and quantum reservoirs are two algorithms that have shown a lot of promise. I really like quantum reservoirs because they fit really nicely in the way that we do things at SAS. I like the quantum neural networks. All of these different areas, whether it be optimisation, simulation for molecular models, optimisation for protein folding, or machine learning, are the big ones. But to make quantum computers useful, you need the algorithms. Without the algorithms, quantum computers are not useful, and that has been a point of contention, because there are only a couple of handfuls of quantum algorithms that can run on today’s quantum computers.
It would be equivalent to me saying, ‘Listen, I want you to build an LLM with a t-test, analysis of variance and regression.’ It would be a very hard thing. We’re asking you to build a house with just a hammer and some nails. I think that we need to get really deep into quantum algorithm development. In some cases, with some quantum algorithms, we’ll try lifting and shifting them into the quantum world. We’re taking that hidden node layer of a standard feed-forward neural network, we’re converting it to an ansatz, or quantum ansatz, and we’re putting that in a quantum computer.
I think that we need to start thinking a little bit more outside the box for quantum. It may not be a simple lift and shift into the quantum world. We might have to redesign some of these algorithms from the ground up to work the way quantum physics does, rather than the way data science does.
AG
Talking about exactly that: in classical analytics, SAS has always focused on decisions. How does that evolve when the underlying system produces probabilistic outcomes, which is the nature of quantum, rather than the deterministic ones your customers are going to want?
BW
It’s an interesting question. I’ll give you an example from optimisation. We found a process, which was patented recently, where if we take an optimisation problem and we run it in a quantum computer, we get a series of answers. And as you mentioned, it’s probabilistic, so we have a whole bunch of answers. If we take those answers, or these solutions, and then we use them as warm starts into our classical optimisation, now it becomes deterministic, because now we get a deterministic answer from that. So that’s one way. In the machine learning aspect of things, it’s a little bit more interesting, because it is not deterministic, so it will vary as you run it. There are certain things that you could do to try to minimise that, but there is going to be some sort of probabilistic outcome.
AG
Does that relate to fault tolerance as well?
BW
Yeah, because that’s the way quantum works. You have this concept of superposition and entanglement. So you take your quantum states, and you put them into a superposition. Let’s bring it all the way down to a single qubit: it could be zero or one, or any combination, any probability in between. So you might measure it 100 times: 50 times it’ll be zero, 50 times it’ll be one.
If you compound that out over time, that superposition, what gives it its power also gives us the probabilistic results. So you could do these computations in superposition, but the result is a probabilistic outcome. To try to minimise that, you design your algorithms using quantum operators that use constructive and destructive interference effects to try to minimise the incorrect answers and maximise the probability of getting the correct answers. We find deviations, but not that much. It’s usually in the same ballpark.
AG
What do you see beyond cryptography and beyond optimisation and beyond complex simulations, beyond having to keep explaining Monte Carlo simulations? In other words, where do you see quantum solving problems that didn’t exist as solvable before, so it was never even proposed as something that could be solved by computers?
BW
I’m going to give you an example from when I was a kid. I had a Commodore PET computer that had a tape deck. I would put a tape in there and go to type in the program. Half an hour later, I would type ‘Run’, and I’d get ‘Syntax error line 14’. I would do the same thing again: ‘Syntax error line 42’. After about five times and an hour of my time or more, I finally got this thing to run. There was nothing wrong with the program; there was something wrong with the encoding of the tape into the computer. It was very rudimentary graphics. And I said to myself, ‘What is this going to be useful for?’ I never would have been able to see back then what the use cases were going to be for personal computers.
The same thing with cell phones. I remember the first cell phone. My friend had one, my rich friend, right? The cell phone was in a suitcase that he carried around. He would pull the thing out of the suitcase and talk on it. I thought, ‘This is a useless technology. It’s mobile, but you’re carrying a suitcase. What is the purpose here?’ Nobody would be able to look at that and see the iPhone.
The same thing with rotary phones and modems. I had one of those, and it was only good for bulletin boards and writing text, and you’re on the phone for like an hour until your mom is screaming at you from the kitchen, ‘Get off the phone’, because you only have one phone line. So, ‘What is this going to be useful for?’ And here we are, I’m streaming videos over it.
I think that we’re at that point with quantum, but we have the hindsight of seeing where these other technologies wound up. So we’re like, ‘Oh, well, quantum could change the world.’ And I think it could change the world in various areas. I think one of the biggest areas is molecular modelling. There are things that we cannot do at all in molecular modelling, because as you increase the number of atoms, the number of electrons, the number of bonds, simulating that on a classical computer becomes intractable. I think that’s going to be a really big area that will have one of the biggest benefits, because it will open up entirely new classes of drugs, entirely new classes of therapies.
I can’t commit. It’s like telling the future.
AG
I’ll ask you for a different kind of forecast then. Since Gen AI exploded, Alan Turing finally became a household name beyond the computing or techie or engineering community. At what point does Max Planck become a household name in the mainstream? And when do they make a movie about him? What will it take from quantum, for Max Planck to become this legend?
BW
It’s going to be a number of people. It was him, it was Richard Feynman, it was David Deutsch. It was a whole group of people. There was this black and white photo, recently colourised, that has all of them, Max Planck, Paul Dirac, Albert Einstein, all in this one picture. And they were probably the greatest physicists of all time. I don’t know if there’s going to be one single name, and I don’t know if there’s going to be a point.
What made AI mainstream? It was that use case of the ChatGPT moment, the LLMs. Now, my kids know of AI. Everybody knows of AI. My son could be getting through college based on just taking assignments and putting them into AI and getting the results. When we get to an inflection point, where we solve a really important problem that can’t be solved now, I think that will be the point where these become household names.
AG
In five years, what would you want to be able to point to and say, that’s the moment quantum stopped being mostly a research story and became a SAS product story? What would you like your contribution to be?
BW
I told you so. I want to be able to say, ‘I told you so.’ You can think of SAS as a big toolbox, and we have a lot of different tools to solve a lot of different problems. I would love to be able to use quantum as another tool in our toolbox, but not just in the quantum lab: across the entire ecosystem of SAS, where you have marketers who might have a really complex marketing optimisation model that they want to use, and they could run it in quantum or in Model Studio. You could just drag and drop quantum models in, and that would farm it out to a quantum computer and bring back the results. That’s what I would love to see.
You’ll send something in SAS, and it’ll run on a CPU, or it might run on a GPU. You don’t really know, nor do you really care. You just want the results back quickly. Well, we see the QPU as something similar. So you could imagine you have an optimisation problem, which could be a hybrid problem, a very highly complex problem with a lot of complex relationships amongst the variables, and that would just be sent over to the quantum computer. Results come back, they get further processed classically, and you don’t know what went where. I would love to see that as the end game.
* Arthur Goldstuck is CEO of World Wide Worx, editor-in-chief of Gadget.co.za, and author of “The Hitchhiker’s Guide to AI – The African Edge”.
