I assume they all crib from the same training sets, but surely one of the billion dollar companies behind them can make their own?
Yeah, they really need to start building RAG-supported models. That way they could actually show where they're getting their data, and even pay the sources fairly. Imagine one RAG or MCP server connecting to Wikipedia, one to encyclopedia.com, and one to Stack Overflow.
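To illustrate the attribution idea: the core of it is just that every retrieved passage carries its source along with it, so the final answer can cite (and in principle compensate) whoever provided the data. A toy sketch, with a made-up in-memory corpus and naive word-overlap scoring standing in for a real retriever:

```python
# Toy sketch of retrieval with source attribution. The corpus entries and
# the scoring function are invented for illustration, not a real RAG stack.

def retrieve(query, corpus):
    """Return the best-matching passage and its source, by naive word overlap."""
    q_words = set(query.lower().split())

    def score(doc):
        return len(q_words & set(doc["text"].lower().split()))

    best = max(corpus, key=score)
    return best["text"], best["source"]

corpus = [
    {"source": "wikipedia.org",
     "text": "Python is a high-level programming language."},
    {"source": "stackoverflow.com",
     "text": "Use a virtual environment to isolate Python dependencies."},
]

text, source = retrieve("how do I isolate Python dependencies", corpus)
print(f"{text} [cited: {source}]")
```

A real system would swap the overlap score for vector search and put each corpus behind its own MCP server, but the cite-your-source plumbing stays the same shape.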