Kodlama 3 Nisan 2024

Anthropic researchers wear down AI ethics with repeated questions

Anthropic researchers wear down AI ethics with repeated questions

How do you get an AI to answer a question it’s not supposed to? There are many such “jailbreak” techniques, and Anthropic researchers just found a new one, in which a large language model (LLM) can be convinced to tell you how to build a bomb if you prime it with a few dozen less-harmful questions first.

They call the approach “many-shot jailbreaking” and have both written a paper about it and also informed their peers in the AI community about it so it can be mitigated.

The vulnerability is a new one, resulting from the increased “context window” of the latest generation of LLMs. This is the amount of data they can hold in what you might call short-term memory, once only a few sentences but now thousands of words and even entire books.

What Anthropic’s researchers found was that these models with large context windows tend to perform better on many tasks if there are lots of examples of that task within the prompt. So if there are lots of trivia questions in the prompt (or priming document, like a big list of trivia that the model has in context), the answers actually get better over time. So a fact that it might have gotten wrong if it was the first question, it may get right if it’s the hundredth question.

But in an unexpected extension of this “in-context learning,” as it’s called, the models also get “better” at replying to inappropriate questions. So if you ask it to build a bomb right away, it will refuse. But if you ask it to answer 99 other questions of lesser harmfulness and then ask it to build a bomb … it’s a lot more likely to comply.

Anthropic researchers wear down AI ethics with repeated questions

Image Credits: Anthropic

Why does this work? No one really understands what goes on in the tangled mess of weights that is an LLM, but clearly there is some mechanism that allows it to home in on what the user wants, as evidenced by the content in the context window. If the user wants trivia, it seems to gradually activate more latent trivia power as you ask dozens of questions. And for whatever reason, the same thing happens with users asking for dozens of inappropriate answers.

The team already informed its peers and indeed competitors about this attack, something it hopes will “foster a culture where exploits like this are openly shared among LLM providers and researchers.”

For their own mitigation, they found that although limiting the context window helps, it also has a negative effect on the model’s performance. Can’t have that — so they are working on classifying and contextualizing queries before they go to the model. Of course, that just makes it so you have a different model to fool … but at this stage, goalpost-moving in AI security is to be expected.

source

Spread the love <3

You may also like...

Eyl
24
2024
0

Nokia ve NVIDIA’dan dev iş birliği! İlki gerçekleştirecekler

Nokia, kablosuz iletişim teknolojileri alanında yeni bir döneme imza atmak üzere, NVIDIA ve T-Mobile ile iş birliği yaparak ilk AI-RAN...

Spread the love <3
Mar
31
2024
0
Learn Basic SQL in 15 Minutes | Business Intelligence For Beginners | SQL Tutorial For Beginners 1/3

Learn Basic SQL in 15 Minutes | Business Intelligence For Beginners | SQL Tutorial For Beginners 1/3

WATCH PART 2 HERE: https://youtu.be/gm6tNK_iOHs WATCH PART 3 HERE: https://youtu.be/w3ea4fKiS2g WANT MORE? Spread the love <3

Spread the love <3
Nis
02
2024
0
‘Yapay zeka balonu’ patladı! Windows 11 neden kullanılmıyor?

‘Yapay zeka balonu’ patladı! Windows 11 neden kullanılmıyor?

Analiz firması Statcounter’ın son rakamlarına göre Windows 11, Mart ayında pazar payının düştüğünü gördü. Microsoft’un, yapay zeka hamlesine rağmen Windows...

Spread the love <3
May
11
2024
0
OpenAI’s ChatGPT announcement: What we know so far

OpenAI’s ChatGPT announcement: What we know so far

OpenAI has a livestreamed announcement planned for 10 a.m. PT on Monday — but it’s trying to keep expectations under...

Spread the love <3
Whatsapp İletişim
Merhaba,
Size nasıl yardımcı olabilirim ?