Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present “Both Sides”.

  • 2025-05-01 08:00:00
  • 404 Media

Current artificial intelligence systems are undeniably marked by clear preferences, or biases, as some would call them. Given the origins of the material used to train these models, whether LLMs (large language models) or AI generators of visual content, it is now widely understood that the outputs these tools produce are shaped by the data used in the training process.

As a result, no response produced by an AI system can be truly unbiased; it will always be influenced by the information underlying the process. Experts and researchers in the field have stressed this point from the beginning, but it appears to have received little serious attention.

However, Meta recently described its efforts to remove these perceived biases during the development of Llama 4, its artificial intelligence model. The reason behind this sudden push? The model tended to show a political bias toward the left.