
Original article published at The Market Mogul May 8, 2018.

“Artificial Intelligence (AI) is the solution. What is the problem?” This quip captures how pervasive AI has become. Almost unnoticed, an AI revolution is upon us. Something has changed in the last half decade to make this happen.

Student Becomes Master

The cat, the lion’s mentor in Indian folklore, had hardly finished teaching her student how to hunt when the lion pounced on her. She escaped by climbing a nearby tree, a trick she had not taught him. Much like that tale of a creation outgrowing its creator, stories of AI matching or exceeding human performance are making big news. Last year AI beat top humans at poker, a game of strategic thinking where both the cards and the bluffing matter. A similar feat was achieved in 2016, when a computer dethroned Lee Sedol, one of the world’s top players of Go, a complex Chinese board game. Beyond these shocking victories over humans in strategy and board games, there has been a marked improvement in the technical performance of machines. This is leading to breakthroughs in autonomous vehicles, visual recognition, natural language processing, lip reading, and many other fields, signalling the development of algorithms with skills superior to those of humans.

In the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC), machines showed dramatic progress, improving from an image recognition error rate of 28% in 2010 to less than 3% in 2016; the best human performance is about 5%. The deployment of a new computational technique, the deep convolutional neural network, in 2012 drove this marked improvement in image processing and set off an industry-wide boom. Similarly, both Microsoft and IBM raised the accuracy of speech recognition on the Switchboard benchmark from 85% in 2011 to 95% in 2017, equalling human performance. Advances in machine learning, pattern recognition, and a host of other technologies are expected to put autonomous cars without human drivers on public roads this year, with Alphabet’s Waymo leading a pack of self-driving car makers in that race.

Classical Versus Deep Learning AIs

The biological brain comprises two different but complementary cognitive systems: the rational (model-based, programmed) and the intuitive (model-free, learning through experience). How the mind selects between these modes, and how intuition and rational thought interface, is beyond current technology to replicate. In the classical approach, AI, with machine learning and neural networks as its subsets, followed rule-based (if-then) systems that imitated the decision-making of experts while ignoring the intuitive part. For complex problems, it was difficult to encode the rules and assemble the requisite knowledge base. Classical AI languished for almost sixty years because human cognition is not based on logic alone. A rough sketch of what such a hand-coded, rule-based system looks like follows below.
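To make the contrast concrete, here is a minimal sketch, in Python, of the classical if-then style. The loan-approval scenario, the thresholds, and the rules are purely illustrative assumptions standing in for the hand-coded expert knowledge described above; they are not drawn from any real system.

```python
# A minimal, illustrative sketch of a classical rule-based (if-then) system.
# Every branch below was written by a person, mimicking an expert's reasoning.

def approve_loan(income, credit_score, existing_debt):
    """Hand-coded expert rules for a hypothetical loan decision."""
    if credit_score < 600:
        return "reject"             # rule 1: poor credit history
    if existing_debt > 0.5 * income:
        return "reject"             # rule 2: too much debt relative to income
    if income > 50_000 and credit_score > 700:
        return "approve"            # rule 3: clearly safe applicant
    return "refer to human expert"  # anything the rules do not cover

print(approve_loan(income=60_000, credit_score=720, existing_debt=10_000))  # approve
print(approve_loan(income=40_000, credit_score=580, existing_debt=5_000))   # reject
```

The difficulty the paragraph points to is visible even here: for genuinely complex problems, the list of rules and exceptions quickly becomes unmanageable, and the intuitive part of expertise never gets captured at all.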

A new approach, based on continuous mathematics, was then developed. In deep learning, the new name given to artificial neural networks, the classical logic-based method has been passed over in favour of experimental computation using continuous mathematics. This change has been made possible by the growing infrastructure of ubiquitous connectivity, exponentially increasing computing power, powerful algorithms, and big data sets.

Artificial neural networks, which loosely mimic the biological brain, are used for the mathematical modelling. The brain is imitated with electronically simulated neurons, interconnected and stacked on top of each other in layers. The hidden layers perform mathematical computations on the inputs. Iterating over the data sets, the network’s errors are propagated backwards through the layers (‘backpropagation’), and a technique called ‘gradient descent’ then adjusts the parameters to improve the model. Perhaps by stumbling on nature’s design, the process works remarkably well, possibly because it imitates something like intuition in the human brain.
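For readers who want to see those ideas in code, here is a minimal sketch of a tiny neural network trained by gradient descent, with gradients obtained via backpropagation. The XOR task, layer sizes, learning rate, and iteration count are illustrative assumptions chosen only to keep the example self-contained; real deep learning systems are vastly larger but follow the same loop.

```python
# A minimal sketch of a neural network trained with backpropagation and
# gradient descent. Task, sizes, and hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Inputs and targets for the XOR problem (not linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of simulated "neurons" between input and output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10_000):
    # Forward pass: each layer performs a mathematical computation on its input.
    h = sigmoid(X @ W1 + b1)      # hidden layer activations
    out = sigmoid(h @ W2 + b2)    # network prediction

    # Backpropagation: push the prediction error back through the layers
    # to get the gradient of the squared error with respect to each parameter.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every parameter a small step downhill.
    W2 -= learning_rate * h.T @ d_out
    b2 -= learning_rate * d_out.sum(axis=0)
    W1 -= learning_rate * X.T @ d_h
    b1 -= learning_rate * d_h.sum(axis=0)

print(out.round(2))  # after training, close to the XOR targets 0, 1, 1, 0
```

Nothing in this loop is a hand-written rule; the behaviour emerges from repeatedly adjusting parameters to reduce error, which is the shift from the classical approach that the article describes.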

Astonishing Results and Possible Futures

The results are so astounding that even the creators of AI programmes cannot explain them. AI systems are like black boxes, taking in questions on one side (“Should this autonomous vehicle accelerate or apply the brakes at this yellow light?” “What is the next move in this board or strategy game?” “What are the objects in this image?”) and giving out answers on the other. It is difficult to explain how the black box works, but work it does. In some situations it will be hard to rely on such an unpredictable, inscrutable, and unexplainable system. This method, using neural networks inspired by the human brain, also requires vast amounts of quality data to be useful, compared with the very little a human needs. Another flaw of deep learning is its inflexibility in transferring experience learned on one task to another. Humans can learn abstract concepts and apply them in different contexts.

Algorithmic, or specialised, intelligence, known as narrow artificial intelligence (NAI), has existed for years and has now grown some teeth. It benefits humankind in many ways but is also capable of causing large-scale damage to an increasingly digital and interconnected infrastructure. General artificial intelligence (GAI), a general-purpose, human-level intelligence, is perhaps a decade away. While GAI could tackle challenges such as climate change, disease, and other problems humans cannot solve, it would also cause economic and cultural upheaval. A GAI undergoing recursive self-improvement would give rise to superintelligence, the third flavour of AI. Superintelligent AIs would radically outperform humans in every field and may pose an existential threat to humanity. They could appear in the next few decades, or not at all.

Human intelligence must first be understood before it can be recreated. With some 20,000 AI research papers published annually, and enrolment in AI programmes and investment both climbing steeply, humanity may be close to accidentally creating a general artificial intelligence.

Here at Komaya, we love keeping abreast of emerging technology and the ways it might help enhance the web presence of our clients’ brands.

We live in an incredible age where scientific advancement is progressing by leaps and bounds unseen before in the history of humanity. As new technologies are developed daily, old ones fall by the wayside. All of this change amounts to a very real and influential force in the way individual companies, and even entire industries, do business. The key is to know what the march of technological innovation means for your business. Here are some helpful ways to think about engaging with emerging technology as an entrepreneur.

Know What’s Out There

If you are a busy business owner, it’s easy to get so focused on what you’re doing (which is, after all, working just fine) that you don’t notice the ways new technology could improve your workflow or business model. Look up from your own business every once in a while and pay attention to what’s going on in the industry around you. If your industry has a trade magazine (or a few), subscribe. Follow leading bloggers who are active in your field. Coming across an idea in one of these forums can plant a seed that you can at least begin to consider as you think about how to improve your business.

Don’t Get Tunnel Vision

While focusing on your industry is a good start, it can also be useful to pay attention to emerging technology in businesses that relate only loosely (if at all) to yours. How many times have you heard something called "the Uber of" this or "the Netflix of" that? Some of the biggest entrepreneurial breakthroughs have occurred when one bold soul looked at an idea in a different industry and said, "Hey, I bet that would work in my space too!"

Use New Tech Efficiently, Not Obsessively

It can be easy to become consumed with every new technology that has even a tangential connection to your industry. Energy spent trying to keep up with the latest trends is energy that could be spent growing your business in other ways. That’s not to say that seeking out new technological advances that could help your business is a bad idea. You just need to be selective and intentional with the attention you pay to the subject, and precise and calculated in how you implement new ideas.

The Early Bird Gets the Worm, but the Second Mouse Gets the Cheese

Being the first of your competitors to embrace a new technological breakthrough can give you a competitive advantage, but that advantage doesn’t come without risk. Think back to the Blu-ray vs HD-DVD battle. HD-DVD actually hit the market first. If you had put all of your eggs in that basket, you’d have been pretty disappointed in how things turned out, with Blu-ray winning in the long run. Every technological advancement has a chance of failing, and at the blinding pace of innovation and corresponding obsolescence, it can be surpassed just as quickly as it appeared.

By keeping your finger on the pulse of technology that relates to your industry AND knowing how to use it effectively, you can separate yourself from the competition.