I take it you don’t understand how startups work?
OpenAI is not making any profit and is losing money hand over fist today. Valuation and raising investment rounds aren’t profit.
Eh? That article says nothing about their profit margins. Today they have something like $3.5B in ARR (not really; that’s annualized from their latest peak, and in Feb they had more like $2B ARR). Meanwhile they have operating costs over $7B, meaning they are losing money hand over fist and not making a profit.
I’m not suggesting anything else, just that they are not profitable, and personally I don’t see a road to profitability beyond subsidizing themselves with investment.
OpenAI is burning billions of dollars, not making a profit.
I live upon morsels you happen to drop
This regulation (and similar being proposed in California) would not be applied retroactively.
All my own OSS stuff I release MIT-licensed, because I want to be able to use the libraries at my closed-source job.
https://arewereorganizedyet.com/ lol already updated
I even read this aloud in my head as “CREATE ZE VUCKING FILE” in a particularly bad German accent, same as over 20 years ago when a friend I worked for drilled it into my head.
https://github.com/Mozilla-Ocho/Memory-Cache is the actual project if you want to use it.
Basically it’s a Firefox extension that saves a page as a PDF into a directory symlinked to your local PrivateGPT install, which then ingests the docs. It doesn’t seem to provide any in-browser querying of PrivateGPT, but I haven’t tried setting it up to confirm that.
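For reference, the wiring is roughly this (a sketch only; both paths here are assumptions, so point them at wherever the extension saves PDFs and wherever your PrivateGPT install picks up documents):

    # Sketch of the symlink setup the extension relies on. Paths are
    # assumptions: saved_pages is wherever the extension writes PDFs,
    # ingest_dir is PrivateGPT's document folder (source_documents in
    # the classic privateGPT layout, if I remember right).
    from pathlib import Path

    saved_pages = Path.home() / "MemoryCache"                     # extension's save directory (assumed)
    ingest_dir = Path.home() / "privateGPT" / "source_documents"  # PrivateGPT ingestion folder (assumed)

    # Make the save directory a symlink into the PrivateGPT install, so
    # every page saved as a PDF shows up in the next ingest run
    # (ingest.py in the classic privateGPT repo, if memory serves).
    if not saved_pages.exists():
        saved_pages.symlink_to(ingest_dir, target_is_directory=True)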
I think that is overly simplistic. The embeddings used by LLMs definitely do encode a concept of what things mean and how things relate to other things.
E.g., compare the embeddings of Paris, Athens, and London and they will have a small cosine distance between one another relative to other cities. Compare France, Greece, and England: same. Then, very interestingly, look at Paris - France, Athens - Greece, and London - England, and you’ll find the resulting vectors all align (fundamentally the vector arithmetic seems to capture the relationship “is the capital of”; a runnable sketch follows below). Then go a step further and compare those vectors to Paris - US, Athens - US, and London - Canada. You’ll see the previous set doesn’t align with these nearly as much, but these do align with each other (the relationship being something like “is a smaller city in this country, named after a famous city in some other country”).
Because of the way attention works, there is a whole bunch of semantic meaning baked into embeddings, and by comparing embeddings you can get at pragmatic meaning as well.
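If you want to poke at this yourself, here’s a minimal sketch in Python using classic pretrained GloVe word vectors via gensim. To be clear, the model name and gensim are my picks for illustration, and static word vectors are only a stand-in for an LLM’s contextual embeddings, but the analogy arithmetic described above originates with exactly this kind of model:

    # Minimal sketch of the capital-of analogy with pretrained GloVe vectors.
    import numpy as np
    import gensim.downloader as api

    vecs = api.load("glove-wiki-gigaword-100")  # ~130MB download on first use

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Capitals sit close to each other in the embedding space...
    print(cosine(vecs["paris"], vecs["athens"]))   # relatively high similarity

    # ...and the "is the capital of" offset vectors roughly align:
    paris_france = vecs["paris"] - vecs["france"]
    athens_greece = vecs["athens"] - vecs["greece"]
    london_england = vecs["london"] - vecs["england"]
    print(cosine(paris_france, athens_greece))
    print(cosine(paris_france, london_england))

    # The classic formulation: paris - france + greece should land near athens.
    print(vecs.most_similar(positive=["paris", "greece"], negative=["france"], topn=3))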
I agree. As a family of 5, many hotels require us to get 2 rooms. Plus, having no option to cook meals usually makes for a much more expensive stay. At least that was the case until a few years ago, when Airbnb went insane with the cleaning fees, cleaning requirements, and all that nonsense.
That seems more like an argument for free higher education than for restricting what corpora a deep learning model can train on.