ANSWER
Artificial intelligence (AI) is an overarching term used to describe how computers are programmed to exhibit human-like intelligence, such as problem solving and learning. This definition is broad and non-specific, which is part of the reason the scope of AI can be confusing. As machines become increasingly capable of performing "intelligent" tasks, those tasks gradually become commonplace and are removed from the scope of what is generally accepted as artificial intelligence. This is known as the AI effect. A more precise definition might be: any device that takes in information from its environment and acts on it to maximize the chance of achieving its goal.
Imagine a computer program that accepts loan applicant information, applies several complex decisioning rules, and determines whether to approve the applicant for a loan based upon the probability of default. This is a form of AI, or at least it used to be. But most of us probably no longer find this type of behavior complex enough to rise to the level of AI. There is a saying that goes "AI is whatever hasn't been done yet".
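To make this concrete, here is a minimal sketch of what such a rule-based approver might look like. The field names, thresholds, and the way risk is scored are all invented for illustration; they are not taken from any real lending system.

```python
# A minimal, hypothetical sketch of the rule-based loan approver described above.
# Field names and thresholds are invented for illustration only.

def estimate_default_probability(applicant: dict) -> float:
    """Combine a few hand-written rules into a rough probability of default."""
    probability = 0.5  # start from a neutral baseline

    if applicant["credit_score"] >= 700:
        probability -= 0.25
    elif applicant["credit_score"] < 600:
        probability += 0.25

    if applicant["debt_to_income"] > 0.4:
        probability += 0.15

    if applicant["years_employed"] >= 2:
        probability -= 0.10

    return min(max(probability, 0.0), 1.0)  # clamp to the range [0, 1]


def approve_loan(applicant: dict, max_default_probability: float = 0.3) -> bool:
    """Approve the applicant only if the estimated default risk is acceptable."""
    return estimate_default_probability(applicant) <= max_default_probability


if __name__ == "__main__":
    applicant = {"credit_score": 720, "debt_to_income": 0.35, "years_employed": 5}
    print(approve_loan(applicant))  # True for this example applicant
```

Every rule here was written by a person, which is exactly why this kind of program has drifted out of what most of us would call AI.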
The spectrum of artificial intelligence runs from narrow AI to general AI. Determining whether to approve a loan applicant is narrow AI. It's a program built with very specific rules to solve a very specific problem. General AI is on the other end of the spectrum. It's what people think about when they imagine a fully independent and reasoning superhuman-like machine.
Two rapidly expanding areas of AI are machine learning and deep learning. They are best described as techniques for achieving artificial intelligence and are driving massive and accelerating progress in the field. You can no longer speak about AI without mentioning them.
Machine learning is an approach that goes beyond programming a computer to exhibit "smart" behavior. Machine learning programs learn from the environment and improve their performance over time. Most machine learning techniques require the programmer to examine the dataset ahead of time and identify the important features. Features are attributes of the data that best correlate with successfully predicting the desired output. For example, a credit score is likely an important feature of the loan applicant dataset when determining the risk of default. The programmer then selects the best model for the machine learning program to apply to those features so that the error rate of the predicted outputs is minimized. It's important to understand that a machine learning program must be trained: hundreds or thousands of well-defined data records are fed into the program so the predictive model can refine itself over time. With each record it learns to predict outputs more accurately when given new input.
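Here is a minimal sketch of that classic workflow, assuming Python with the scikit-learn library (neither is named above) and entirely synthetic loan data: the programmer picks the features up front, trains the model on labelled records, and then measures how well it predicts records it has never seen.

```python
# A sketch of the "classic" machine learning workflow described above, using
# scikit-learn and synthetic data. The features (credit score, debt-to-income
# ratio) are chosen by the programmer ahead of time.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training records with two hand-picked features:
# [credit_score, debt_to_income]. The labels and the rule that generates them
# are invented purely for illustration.
n = 1000
credit_score = rng.integers(500, 850, size=n)
debt_to_income = rng.uniform(0.0, 0.8, size=n)
X = np.column_stack([credit_score, debt_to_income])
defaulted = ((credit_score < 620) | (debt_to_income > 0.55)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, defaulted, random_state=0)

# Train on labelled records, then check the error rate on held-out data.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Predict the default risk for a brand-new applicant.
new_applicant = [[720, 0.35]]
print("Predicted probability of default:", model.predict_proba(new_applicant)[0][1])
```

Notice that no approval rules were written by hand; the model infers the relationship between the features and the outcome from the training records.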
Another popular AI technique, itself a subset of machine learning, is deep learning. Just like other machine learning programs, deep learning programs learn and improve their performance over time. They get their name from the "deep," multi-layered neural networks they use to learn and predict outcomes. Much like the structure of the human brain, neural networks are made up of many nodes (like neurons) that receive inputs, perform a function, and pass the result on to other nodes. By chaining many nodes together in a web-like or tree-like structure, complex decisioning can be achieved. Unlike other types of machine learning programs, deep learning neural nets do NOT require the programmer to pre-identify the important features of the data. They are capable of automatically extracting the features that are most influential in producing successful predictions. Deep learning programs require substantial computing power and massive amounts of data to be trained.
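As a rough sketch of the difference, the example below builds a small multi-layered network with Keras (an assumed choice; no library is named above) and feeds it raw pixel values from the public MNIST handwritten-digit dataset rather than hand-picked features. The layer sizes and training settings are arbitrary.

```python
# A minimal sketch of a deep, multi-layered neural network using Keras.
# The network is given raw pixel values; it learns its own internal features.

from tensorflow import keras

# MNIST handwritten digits: 28x28 grayscale images, labels 0-9.
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# Chain layers of nodes together; each layer transforms the previous layer's output.
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),                        # raw pixels in, no manual features
    keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    keras.layers.Dense(64, activation="relu"),     # hidden layer 2
    keras.layers.Dense(10, activation="softmax"),  # one output node per digit
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training needs many labelled examples and noticeably more compute
# than the logistic-regression sketch above.
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```

Even this toy network takes noticeably longer to train than the earlier example, which hints at why deep learning demands so much more data and computing power.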
So what does all of this mean for the business analyst? According to a widely cited Oxford University study, nearly 47% of all jobs are at high risk of being automated within the next two decades. Will business analysts be among them? That projection is frightening indeed, but let's put it into perspective. First, people in their mid-40s have much less to worry about, since they will likely be approaching retirement by then. For those who are younger, two decades is plenty of time to adapt, continue their education, and retrain as needed. Keep in mind that many jobs won't disappear with a single AI advancement; instead, various aspects of a job will slowly be replaced by AI over time.
Baidu chief scientist, Coursera co-founder, and Stanford adjunct professor Andrew Ng is a respected leader in the AI field. During a speech at Stanford he addressed some of the more immediate ways that business analysts and product managers will need to evolve as they support AI projects. Traditional applications tend to get their information through keyboard inputs, mouse clicks, and input files in text form. But AI programs typically require vastly larger quantities of data to be successful and, therefore, get their information in alternative formats such as voice streams, video, and photographs, much of it in real time. There isn't yet consensus on how best to define and communicate requirements for these kinds of sources. This is perhaps the first and most immediate opportunity for business analysts: adapting our role to AI projects. One thing is for certain: the safest place to be when AI starts wiping out jobs is working in AI.