From the course: AI Product Security: Testing, Validation, and Maintenance
Introduction to AI security
- [Instructor] While the initial deployments of AI chatbots and AI-enabled applications have made a big impact, they haven't been a total success. We've rapidly learned about some of the shortcomings in AI models, such as toxicity and hallucination, as well as some of the vulnerabilities to which they can fall prey, such as prompt and code injections.

Before we get into these, let's take a look at what an AI model is in simple terms. At the heart of the AI model is a series of what are called vectors: multi-dimensional arrays of numbers containing what are, in essence, probabilities. AI models work with numbers, so when we talk to an AI model, it converts our words into numbers, called tokens in AI terminology, and then ingests them. This representation of our input, called a prompt in AI terminology, then passes through the vectors using some form of algorithm, and a set of tokens is output. These tokens are turned back into words, and we have our response.

If we want to dig deeper, we can look at the actual construction of an AI model. This involves a series of components that process our token string, starting with token embedding, which turns each token number into a set of probabilities across a vector. There are a number of additional processing steps through the layers of the model until we come to the feed-forward layer, which outputs the token string. We won't go into the data science behind this, but if you're interested in learning about the actual maths of the AI model, do take a look at the other courses you'll find in the AI and data science parts of the LinkedIn Learning Library.

Another key question with an AI model is: how did those vectors get in there? The answer is that the model was trained. Let's consider a language model. We take a large amount of text, known as a training dataset, and break it down into smaller pieces, such as a page; these pieces are called batches in AI terminology.
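The token flow described above can be sketched with a toy example. Everything here is invented for illustration: the four-word vocabulary, the token IDs, and the two-dimensional embeddings bear no relation to any real model's tokenizer, which would use subword tokens and learned vectors with hundreds or thousands of dimensions.

```python
# Toy sketch of the words -> tokens -> embeddings -> words flow.
# Vocabulary, IDs, and embedding values are all illustrative.
vocab = {"hello": 0, "world": 1, "ai": 2, "security": 3}
inverse_vocab = {token_id: word for word, token_id in vocab.items()}

# Each token ID maps to a small vector; in a real model these
# values are learned during training, not hand-written.
embeddings = {
    0: [0.1, 0.9],
    1: [0.8, 0.2],
    2: [0.4, 0.4],
    3: [0.7, 0.3],
}

def tokenize(text):
    """Convert words into token IDs."""
    return [vocab[word] for word in text.lower().split()]

def detokenize(token_ids):
    """Convert token IDs back into words."""
    return " ".join(inverse_vocab[t] for t in token_ids)

prompt = "AI security"
tokens = tokenize(prompt)                    # [2, 3]
vectors = [embeddings[t] for t in tokens]    # embedded prompt
# ...a real model's layers would transform these vectors
# and emit a new sequence of output tokens here...
print(detokenize(tokens))                    # ai security
```

The point of the sketch is only the shape of the pipeline: text in, numbers through the model, text out.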
These are then tokenized and embedded into the model using a training algorithm, which progressively refines the default set of probabilities in the vectors. In practice, we need two additional sets of data, which we can take from the dataset: one for testing as we're developing our application, and a second for validation, to confirm that the model delivers acceptable results when used.

AI models don't only work on text; they can work on audio and images too. The way they work is much the same, although the token may have multiple components: numbers representing red, green, and blue for a pixel in the case of an image, or frequency, pitch, and timbre in the case of audio. Of course, there are many different tokenization techniques, each with its own form of token.

So we might ask: where does security fit into this process? Well, if we look at the lifecycle of an AI model, we can see that it starts with understanding the problem and then determining where to find the dataset that will enable us to build the model. This is followed by encoding the dataset into the most suitable form for training and then using it to train the model. At this stage, we'll need to secure the dataset, as any malicious changes made to it will affect how the model works, or whether it works at all. Once the model's been trained, it can be validated to ensure that it works properly, and then deployed. At this point, we need to be concerned about securing the validation data and the model itself. Finally, we need to audit the operation of the model and, if required, update the dataset and retrain.

Sadly, we've seen plenty of incidents involving AI models. In one, an early deployment of Microsoft's chatbot Tay caused a furore when it started producing toxic content. In another incident, the chatbot used by Air Canada gave a passenger wrong advice, and the airline was made to stand by what the chatbot had advised.
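The split into training, test, and validation data described above can be sketched as follows. The fractions, seed, and "one page per record" framing are assumptions chosen for illustration; real pipelines often use library helpers rather than hand-rolled code.

```python
import random

def split_dataset(records, test_frac=0.1, val_frac=0.1, seed=42):
    """Shuffle a dataset, then carve off test and validation
    sets, keeping the remainder for training."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    n_val = int(len(shuffled) * val_frac)
    test = shuffled[:n_test]
    validation = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, test, validation

def batches(records, batch_size):
    """Yield fixed-size batches of records for training."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

# 100 hypothetical "pages" of text, split 80/10/10.
pages = [f"page-{i}" for i in range(100)]
train, test, validation = split_dataset(pages)
print(len(train), len(test), len(validation))  # 80 10 10
```

Keeping the test and validation sets separate from the training data is what lets the later validation step give an honest measure of the model, which is also why both sets need to be secured against tampering.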
In another incident, lawyers used a chatbot to generate precedents to take to court, but the opposing counsel identified them as hallucinated content. If we're to gain the value that's promised by AI, we need to have safe and responsible AI models to use, and that's what we'll cover in this course.
Contents
- Introduction to AI security (4m 38s)
- Security testing for AI applications (3m 59s)
- Setting up a testing lab (7m 10s)
- Introduction to HuggingFace (5m 11s)
- Managing local models with ollama (1m 49s)
- Test case management with KiwiTCMS (1m 49s)
- Security testing with KiwiTCMS (8m 33s)
- Understanding AI threats (6m 26s)
- Testing requirements in AI standards (2m 55s)