The Rakuda Ranking of Japanese AI

[Figure: chart showing the relative strengths of models on the Rakuda benchmark]

Rakuda is a ranking of Japanese Large Language Models based on how well they answer a set of open-ended questions, posed in Japanese, about Japanese topics. We hope that Rakuda can help stimulate the development of open-source models that perform well in Japanese, in the spirit of English-language leaderboards like Hugging Face's Human Eval LLM leaderboard. For a detailed explanation of how Rakuda works, please see the accompanying blog post; for the full implementation, see the project on GitHub.

In brief, we ask each AI assistant in the ranking to answer a set of 40 open-ended questions. We then show pairs of these answers to GPT-4 and ask it to judge which model gave the better answer. From GPT-4's preferences, we estimate the underlying Bradley-Terry strength of each model in a Bayesian fashion. Bradley-Terry strengths can be thought of as a statistically principled version of Elo scores: rather than being updated match by match, they are fit to all pairwise comparisons at once.
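To give a rough sense of that last step, the sketch below fits maximum-likelihood Bradley-Terry strengths from a list of pairwise judge preferences and rescales them to an Elo-like scale. It is only an illustration: the function name, the toy match data, and the centring at 1000 are our own choices, and the actual Rakuda ranking uses a fully Bayesian fit (see the GitHub repository) rather than the point estimates shown here.

```python
import numpy as np

def fit_bradley_terry(matches, models, n_iter=2000, lr=0.1):
    """Fit Bradley-Terry strengths by maximum likelihood from pairwise wins.

    `matches` is a list of (winner, loser) model-name pairs, e.g. the
    preferences expressed by the GPT-4 judge. The real Rakuda ranking
    samples the posterior over strengths instead of returning a single
    point estimate; this is just a minimal sketch of the same idea.
    """
    index = {m: i for i, m in enumerate(models)}
    theta = np.zeros(len(models))                      # log-strengths, one per model
    wins = np.array([(index[w], index[l]) for w, l in matches])

    for _ in range(n_iter):
        # P(winner beats loser) = sigmoid(theta_winner - theta_loser)
        p = 1.0 / (1.0 + np.exp(-(theta[wins[:, 0]] - theta[wins[:, 1]])))
        grad = np.zeros_like(theta)
        np.add.at(grad, wins[:, 0], 1.0 - p)           # winners pulled up
        np.add.at(grad, wins[:, 1], -(1.0 - p))        # losers pushed down
        theta += lr * grad / len(matches)
        theta -= theta.mean()                          # strengths are relative; fix the mean

    # Rescale to an Elo-like scale centred on 1000 (400 points per factor of 10 in odds).
    return {m: 1000 + 400 / np.log(10) * theta[i] for m, i in index.items()}

# Toy usage with hypothetical judge preferences.
matches = [("gpt-4", "gpt-3.5"), ("gpt-4", "claude-2"), ("claude-2", "gpt-3.5")]
print(fit_bradley_terry(matches, ["gpt-4", "claude-2", "gpt-3.5"]))
```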

Please contact us if you have any suggestions or requests for models that you'd like us to add to this ranking!

| Rank | Model | Strength | Stronger than the next model at confidence level |
|------|-------|----------|--------------------------------------------------|
| 1 | gpt-4 | 1472 ± 49 | 97.5% |
| 2 | claude-2 | 1353 ± 42 | 89.3% |
| 3 | gpt-3.5 | 1285 ± 37 | 100.0% |
| 4 | llama2-70b (StableBeluga2) | 1089 ± 30 | 85.8% |
| 5 | elyza-7b-fast | 1044 ± 29 | 75.0% |
| 6 | chatntq-7b-jpntuned | 1017 ± 29 | 50.9% |
| 7 | elyza-7b | 1016 ± 29 | 86.8% |
| 8 | rwkv-world-7b-jp-v1 | 971 ± 29 | 89.8% |
| 9 | super-trin | 921 ± 27 | 71.4% |
| 10 | line-3.6b | 899 ± 29 | 92.8% |
| 11 | ja-stablelm-7b | 839 ± 28 | 95.3% |
| 12 | weblab-10b | 764 ± 33 | 97.5% |
| 13 | rinna-3.6b (PPO) | 674 ± 36 | 66.2% |
| 14 | rinna-3.6b (SFT) | 653 ± 36 | N/A |

Last updated: 2023-09-27