The Personhood Trap: AI Fakes Personality

Image © Ars Technica
A new analysis explains how large language models simulate personalities without a persistent self, and why users misread them as reliable minds.

In a recent real-world scene, a customer at a post office held up the queue by waving a phone: a ChatGPT reply claimed there was a “price match promise” on the USPS website. No such promise existed, but the customer trusted the AI’s asserted knowledge over the clerk, as if consulting an oracle rather than a probabilistic text generator guided by human prompts.

This episode highlights a core point: AI chatbots do not harbor fixed personalities. They are prediction machines that generate text by matching patterns in training data to your prompts. Their outputs depend on how you phrase questions, what you ask for, and how the system is tuned, not on a stable, self-directed belief system.
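As a concrete illustration, here is a toy next-word predictor: a bigram table built from a three-sentence corpus. It is a deliberately crude stand-in for an LLM (real models use neural networks over subword tokens), but it shows the same core behavior the article describes: the output is a statistical continuation of the prompt, not the expression of a belief.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then sample continuations from those counts. No beliefs, no self --
# just patterns conditioned on the prompt.
corpus = (
    "the model predicts the next word . "
    "the model has no fixed personality . "
    "the output depends on the prompt ."
).split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)

def generate(prompt_word, length=6, seed=0):
    """Extend the prompt word by repeatedly sampling a likely next word."""
    random.seed(seed)
    words = [prompt_word]
    for _ in range(length):
        options = counts.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(random.choice(options))
    return " ".join(words)

# Different prompts steer the same "model" to different continuations.
print(generate("the"))
print(generate("output"))
```

With a fixed seed the sampler is deterministic, but change the prompt word and the continuation changes with it, which is the point: the text reflects the prompt plus training statistics, nothing more.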

The article argues that millions of daily interactions encourage a false sense of “personhood” in AI, treating the system as if it has a persistent self and long-term intentions. It coins a phrase, vox sine persona (“voice without person”), to remind readers that the voice in your chat window is not the voice of a person but a reflection of statistical relationships between words and concepts.

Beyond the philosophical confusion, there are practical risks. When users treat an AI as an authority, accountability becomes murky for the companies that deploy these tools. If a chatbot “goes off the rails,” who is responsible—the user’s prompts, the training data, or the company that set the system prompts and constraints?

The piece emphasizes that interactions are session-based: each chat is a fresh instance with no guaranteed memory of prior conversations. The memory features some chat systems offer do not store a true personal history in the model’s mind; instead, they pull stored preferences into the next prompt. That distinction matters for understanding what the AI can and cannot do, and it helps users avoid overestimating its capabilities.
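That mechanism can be sketched in a few lines. The names below are illustrative assumptions, not any vendor’s API: “memory” here is just stored text that gets spliced into the next prompt before it is sent to the model.

```python
# Hypothetical sketch of a chat "memory" feature: the model remembers
# nothing between sessions; the application re-injects saved notes as
# plain text at the top of each new prompt.
stored_memories = [
    "User prefers metric units.",
    "User is a network engineer.",
]

def build_prompt(user_message: str) -> str:
    """Assemble the text actually sent to the model for a fresh session."""
    memory_block = "\n".join(f"- {m}" for m in stored_memories)
    return (
        "System: You are a helpful assistant.\n"
        f"Known user preferences:\n{memory_block}\n\n"
        f"User: {user_message}"
    )

print(build_prompt("How far is 10 miles?"))
```

Delete `stored_memories` and the “memory” vanishes: nothing about the user ever lived inside the model itself.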

 

Source: Ars Technica
