Reinforcement Learning from Human Feedback (RLHF)

Last updated on November 12, 2024

Reinforcement Learning from Human Feedback (RLHF) is a method for fine-tuning LLMs so that their outputs align with human preference data.
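
To make the "human preference data" part concrete, here is a minimal, self-contained sketch of the reward-modeling step that typically precedes the reinforcement-learning step in RLHF. It uses toy feature vectors in place of real LLM hidden states, and all names, shapes, and hyperparameters are illustrative assumptions rather than any specific library's API; the pairwise Bradley-Terry loss is the standard way such preferences are turned into a training signal.

```python
# Toy sketch of reward-model training from pairwise human preferences.
# Feature vectors stand in for LLM representations of responses.
import torch
import torch.nn as nn

torch.manual_seed(0)

feature_dim = 16
reward_model = nn.Linear(feature_dim, 1)  # maps a response to a scalar reward
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake preference data: for each prompt, humans preferred `chosen` over `rejected`.
batch_size = 8
chosen = torch.randn(batch_size, feature_dim)
rejected = torch.randn(batch_size, feature_dim)

for step in range(100):
    r_chosen = reward_model(chosen).squeeze(-1)      # rewards for preferred responses
    r_rejected = reward_model(rejected).squeeze(-1)  # rewards for dispreferred responses

    # Bradley-Terry pairwise loss: push the preferred response's reward higher.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final pairwise loss: {loss.item():.4f}")
# In full RLHF, the trained reward model then supplies the reward signal when the
# LLM is fine-tuned with a policy-gradient method such as PPO.
```
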

