Where do the new results put the field, and which avenues need more focus and which need less, to the point of complete cutoff?
They fine-tuned it for that test.
What we are seeing is a marketing trick to keep markets and investors excited about AI. It's a trillion-dollar industry for NVidia and other players. Fake it until you make it.
If you look deeper, there's been very little change since GPT-3.5, and Anthropic has caught up with everything OpenAI has built so far.
Sora was a huge fluke, with other companies clearly ahead of it. It's also mostly useless.
The numbers don't add up.
Finetuning has been looked down upon because all it does is rearrange weights to learn the style of the finetuning dataset. It does not teach the model anything new, in contrast to the hopes behind finetuning.
If a model was able to ace the ARC test just by the merit of being finetuned, does that not imply there is something of real substance here? I.e., the model is capable of meta-learning, and all it needs to adapt to a new task is a bit of finetuning, which, again I emphasize, is the lowest tier in the ranks of types of model training.
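To make the "rearranging weights" point concrete, here is a minimal sketch in PyTorch of what finetuning does mechanically. Everything here is hypothetical: `ToyModel` and `make_task_batch` are made-up stand-ins (with random data), not any lab's actual setup. The point it illustrates is that a few low-learning-rate gradient steps on task-specific data perturb the existing weights rather than rebuild them.

```python
# Minimal finetuning sketch. ToyModel stands in for a pretrained
# network whose weights are assumed to already encode general
# capabilities; make_task_batch stands in for a small task dataset
# (e.g. serialized ARC-style examples). Both are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyModel(nn.Module):
    def __init__(self, dim=32, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                      nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

def make_task_batch(n=64, dim=32, n_classes=4):
    # Random tensors as placeholder task data; real finetuning
    # would use curated task examples.
    x = torch.randn(n, dim)
    y = torch.randint(0, n_classes, (n,))
    return x, y

model = ToyModel()
# Snapshot the "pretrained" weights before finetuning.
before = {k: v.clone() for k, v in model.state_dict().items()}

# Small learning rate, few steps: finetuning perturbs, rather
# than rebuilds, the existing weights.
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for step in range(100):
    x, y = make_task_batch()
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Measure how little each layer's weights actually moved.
for k, v in model.state_dict().items():
    delta = (v - before[k]).norm() / before[k].norm().clamp_min(1e-8)
    print(f"{k}: relative weight change = {delta.item():.4f}")
```

Running this prints per-layer relative weight changes, which stay small. Whether such small perturbations on a task dataset amount to genuine meta-learning, or merely restyling, is exactly the question the ARC result raises.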