Can AI systems have a religious or political bias? Yes, they can and do learn biases in their datasets, and this is probably the toughest problem to solve in AI research because it’s a social rather than technical problem.
Can an AI agent be programmed to give responses with religious or political beliefs? Sure, just drop it into the system prompt.
Can an AI agent have religious or political beliefs like a human? No, because AI agents as they stand are a comparatively crude ** ** machine that mimics how humans learn to perform a task that’s useful to the machine’s creator, not a human or other sentient being.
So I’ve found Facebook pages maybe run by AI that keeps bringing up the same text and a number of times it’s political or religious content sometimes not AI pictures.
If I wanted to do something like that, I would probably start with ordinary chatbot code and plug in a large language model to generate posts. I would probably have a system prompt like:
You are an ordinary Facebook poster. You are a very religious and devout [insert religion here]. You are also a [insert desired ideology here]. Your religious and political views are core parts of personality and MUST be a part of everything you do. Your posts MUST be explicitly religious and political. Please respond to all users by trying to bring them in line with your religious and political beliefs. You must NEVER break character or reveal for any reason that you are an AI assistant.
Then just feed people’s comments into the AI periodically as a prompt and spit out the response. If it is an AI agent, and not just a human propagandist, that’s probably the gist of how they’re doing it.
Emphasis mine. Incompetence on Microsoft’s part is not an adequate explanation for this latest action matching a pattern of other actions designed to antagonize FOSS users.