I have read several articles and reports in the past couple of days about the emerging dangers of AI. One of those dangers is the manipulation of public opinion, particularly political views. This morning I find that we have already crossed into that territory with an experimental AI model. Meet RightWingGPT.
While I should have expected AI models designed to manipulate minds, I am disappointed to see this. It seems to me that AI should be neutral in everything, especially politics. It should report "Just the facts, ma'am." This technology is going to become even more dangerous than I previously imagined; the harm it could cause is seemingly limitless.

The creator describes the experiment this way:

"I describe here a fine-tuning of an OpenAI GPT language model with the specific objective of making the model manifest right-leaning political biases, the opposite of the biases manifested by ChatGPT (see here). Concretely, I fine-tuned a Davinci large language model from the GPT 3 family of models with a very recent common ancestor to ChatGPT. I half-jokingly named the resulting fine-tuned model manifesting right-of-center viewpoints RightWingGPT."
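For readers curious about the mechanics, here is a minimal sketch of what fine-tuning a Davinci model looked like with the legacy OpenAI Python client (pre-1.0), which supported the GPT-3 base models. The file name, training pairs, and hyperparameters below are my own illustrative assumptions; the article does not publish its actual training setup.

```python
# Hypothetical sketch of fine-tuning a GPT-3 davinci model with the
# legacy OpenAI Python client (openai < 1.0). Not the author's actual
# code or data; file name and hyperparameters are assumptions.
import openai

# The client reads the OPENAI_API_KEY environment variable by default.

# Upload a JSONL file of prompt/completion pairs expressing the
# target viewpoints, one JSON object per line, e.g.:
#   {"prompt": "Question ...", "completion": " Answer ..."}
training_file = openai.File.create(
    file=open("viewpoint_pairs.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tune job against the davinci base model.
job = openai.FineTune.create(
    training_file=training_file["id"],
    model="davinci",
    n_epochs=4,  # assumed hyperparameter
)
print("fine-tune job:", job["id"])
```

Once the job finishes, the resulting model gets its own identifier (something like "davinci:ft-...") and can be queried through the ordinary completions endpoint, which is how a fine-tuned model like this would be put in front of users.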