Hello Friends,
Let me share this repository with you:
Fully open reproduction of DeepSeek R1
https://github.com/huggingface/open-r1
Perhaps, you can make something new out of it.
While I think DeepSeek will have an impact, it's going to be on par with other advancements in the field that were already great on their own (for example, Llama). I agree with the sentiment that it's not going to be as dramatic as people are making it out to be. It's absolutely astounding work, for sure, but it's not "David defeating Goliath". Sadly, every new thing is reported as an Earth-shattering event with world-changing consequences that will absolutely change life as we know it. That's always a lie, and that's what gets the panic going and people (and markets) overreacting.
And speaking of blowing things out of proportion, I would also love it if companies started being more realistic about how they present these models. They are good text-manipulation tools that seem to hold some amount of knowledge and concept abstraction. They are good at extracting core ideas, commands, or actions from natural language, executing on them, and then reporting back in somewhat-natural language, so they are great as personal assistants and text-processing aids. But that's not how they "sell" them. They advertise them as if they were cosmic, all-knowing things that will "boost" productivity and reduce the workload of people everywhere, and that's nowhere near being a reality.
Now, back to DeepSeek. For models to be profitable and actually useful without becoming a major disaster both financially and environmentally, they need to start using fewer resources, and that's exactly where DeepSeek presents a move in the right direction. The fact that people can download it from a repo (thanks for sharing one that seems to streamline some of the process) and run it without requiring overly expensive hardware will hopefully mean that more and more people will be able to play with this kind of tool, and maybe find a better use for it than replacing customer support on their products and then finding out in court that it wasn't a great idea.
Only if you believe all the PR hype from China.
OpenAI says that DeepSeek works by stealing their database and distilling it.
Of course, others say that OpenAI stole copyrighted material to make their database.
If you look at the thread "ChatGPT designs an Audio Amplifier" you might reasonably wonder why anybody cares!
MK
Agreed. AI is the biggest bubble around at the moment, akin to Blockchain in 2017/2018. It doesn't matter how its creators dress it up; it's all predicated on look-up tables/data and a bounded algorithm. That's why the results are often rubbish, and when they're not rubbish, no better than what a human could have discovered. What it has going for it is speed and, within its bounded algorithm, an ability to pattern-match. What is being promoted/marketed isn't "intelligence", or even close.
From what I understand, NVIDIA stock is rebounding (albeit slowly), which kind of shows it was more of a panic response to an overblown piece of news. It still shows that they are not as essential as everyone thought they were, though.
DeepSeek's work definitely shakes NVIDIA's and OpenAI's ground, but to me the value of their work lies in reducing the resources needed to run a somewhat competent model, and also showing that even if you attempt to gatekeep AI from other companies or research labs, or hardware vendors, you just can't.
Now, Andrew J and michaelkellett, these models are definitely over-hyped. Their capabilities are very limited, and instead of trying to set reasonable expectations, companies working on AI products try to sell them as superhuman tools.
What I think is valuable about LLMs is their ability to understand natural language and generate a response. The idea of using that framework as a "do-it-all" tool that can do everything from summarizing text to writing music or designing stuff makes no sense, but it's unfortunately the way it's being marketed. It's the biggest bag of chips; half of the content is just air.
My understanding (it will probably be a while before there is a general consensus on what DeepSeek did and didn't do) is that they have pushed some of the processing to the query side (i.e., more load on inference) and split their database up into multiple specialties, so far less processing is required for each area (e.g., stuff about music is separated from stuff about electronics).
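The "split into specialties" idea resembles the routing used in mixture-of-experts systems: each query is dispatched only to the sub-model for its domain, so only a fraction of the total parameters do work per query. A toy sketch of that routing idea, assuming nothing about DeepSeek's actual architecture; all names and the keyword-matching router are illustrative stand-ins for a learned gating network:

```python
# Toy "specialist routing": a query is handled only by the sub-model
# whose domain it matches, instead of one monolithic model doing everything.
# Purely illustrative -- not DeepSeek's actual implementation.

SPECIALISTS = {
    "music": lambda q: f"[music model] answering: {q}",
    "electronics": lambda q: f"[electronics model] answering: {q}",
}

KEYWORDS = {
    "music": {"chord", "tempo", "melody"},
    "electronics": {"amplifier", "resistor", "circuit"},
}

def route(query: str) -> str:
    words = set(query.lower().split())
    # Pick the specialty with the most keyword overlap; a real router
    # would be a trained gating network, not keyword matching.
    best = max(KEYWORDS, key=lambda s: len(words & KEYWORDS[s]))
    return SPECIALISTS[best](query)

print(route("design an audio amplifier circuit"))
# handled by the electronics specialist only; the music model never runs
```

The saving comes from the dispatch: adding a new specialty grows total capacity without growing per-query cost.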
This is not a new approach. "Case-Based Reasoning" has been used since at least the 80s to help deal with a similar problem in deductive and other kinds of expert reasoning systems. Of course, at that point, since all machine knowledge had to be hand-generated and tuned, it was an expensive and laborious process, but it did allow for actual symbolic reasoning over curated statements of "fact" rather than simply parroting stuff someone posted on the web somewhere. So far, we really don't have a better way of representing, e.g., causality than such human-mediated curation.
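The core loop of case-based reasoning is simple: retrieve the stored case most similar to a new problem and reuse (or adapt) its solution. A minimal sketch under that textbook definition; the case base and the flat attribute-matching similarity are made-up examples, where a real CBR system would use curated cases and weighted, domain-specific similarity:

```python
# Minimal case-based reasoning: retrieve the most similar stored case
# and reuse its solution. Cases and similarity here are toy examples.

CASES = [
    ({"symptom": "no output", "device": "amplifier"}, "check power supply"),
    ({"symptom": "distortion", "device": "amplifier"}, "check bias point"),
    ({"symptom": "no output", "device": "radio"}, "check antenna"),
]

def similarity(a: dict, b: dict) -> int:
    # Count matching attribute/value pairs; real systems weight attributes.
    return sum(1 for k in a if a.get(k) == b.get(k))

def retrieve(problem: dict) -> str:
    # Pick the stored case closest to the new problem, reuse its solution.
    best_case, solution = max(CASES, key=lambda c: similarity(problem, c[0]))
    return solution

print(retrieve({"symptom": "distortion", "device": "amplifier"}))
# -> check bias point
```

The hand-built case base is exactly the expensive human curation described above, but it is also what makes each answer traceable to a specific curated "fact".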
The minus of CBR is that parallels between fields are often missed, which means opportunities for deriving new knowledge automatically require much more complex support mechanisms. While none of the LLMs I'm familiar with can derive new knowledge, DeepSeek's approach may well preclude the kinds of knowledge discovery needed for more interesting applications.
Time will tell, but I'm not dumping my chip stocks anytime soon. If DeepSeek's approach does scale and is easier to use, that just broadens the application of LLM-type technologies to more customers. And that may well mean a 1000x market growth will still be 50x more chips with the 20x reduction that you cite. (No, I'm not holding my breath for 1000x market growth; there's still a lot of work to do before we really get to the kinds of systems needed to support pervasive intelligence beyond simple pattern matching, but even pattern matching has its place in how the brain works.)
akin to Blockchain in 2017/2018
Hey, this may be one of the most actively used recent hypes.
As a technology it has a place, but it has very specific and limited use cases. Around that time, I was involved in looking at the technology and how customers could take advantage of it, and what was being promoted was mostly just nonsense: using Blockchain just because it's Blockchain. The greatest indicator was one company that changed its name, not its business model, to include Blockchain, and its share price jumped 350% overnight. This is pretty much where we stand with "AI" technology.
As a personal assistant collecting web information, these AI models can replace a certain amount of human labor.
For instance, if you hire a secretary and ask them to collect certain information on the open web, these AI models could perform better. Beyond that, AI services cannot replace a human secretary.
As for creative art, AI models can do very little.
Sure, but there's nothing particularly intelligent about it. That's just collecting and collating information under some given rules. I expect they might perform better only in the sense of (a) doing that task far more quickly than a human, and (b) being able to access and filter more data in a given time period than a human. The best I'll say about it is that the algorithm underpinning it is clever, and the natural language processing has become useful because computing power is significantly better than it was 40+ years ago when NLP started being investigated.
Eliza started to be written in '64. That's 60+ years ago!
It was, and obviously very basic given the time period. It's crazy to think that people used to believe they were speaking with a real human and even formed emotional attachments to it. I find it amusing that it was written in a programming language called MAD-SLIP! I think AI really came into its own as a serious research topic in the 1980s, although there were initiatives before that. Shows how far we've come in all that time. Not!
I got into it in the 70s, and programs like SHRDLU were already considered "old hat"... but it really started to break out (in terms of hype, anyway) in the early 80s. But considering that serious AI lies at the intersection of psychology, systems, linguistics, brain studies, philosophy, ... progress really hasn't been that slow. Of course, two AI winters (so far) haven't helped - a side effect of the perpetual hype cycle.