I'm not entirely convinced this is accurate. I do see your point, and I hadn't considered that there is no more training data to use, but at the end of the day our current AI is just pattern recognition. Hence, couldn't you use a hybrid system where you set up billions of use cases (translate point A to point B, apply a force such that object A rolls a specified distance, set up a neural network using backpropagation with three hidden layers, etc.) and then have two adversarial AIs? One attempts to "solve" that use case by randomly trying stuff, and the other basically just says "you're not doing well enough, and here's why." Once your first AI is doing a good job with that very specific use case, index it. Now when people ask for that specific use case, or a larger problem that includes it, you don't even need AI. You just plug in the already-solved solution. Your code base basically becomes AI filling out every possible question on Stack Overflow.
Obviously this isn't actual coding with AI; at the end of the day you're still doing all the heavy lifting. It's effectively no different from how most coders code today, just stealing code from Stack Overflow XD The only difference would be that this Stack Overflow is filled with every conceivable question, and if yours isn't answered, you can just request that a new pair of adversarial AIs be set up to solve the new problem.
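For what it's worth, here's a minimal sketch of what I mean, shrunk to a single toy use case (the "roll an object a specified distance" example, reduced to picking a force value). Everything in it (the names, the fake physics, the tolerance) is a hypothetical illustration of the solver/critic/index loop, not a real system:

```python
import random

SOLVED_INDEX = {}  # use-case spec -> cached solution (the "pre-answered Stack Overflow")

def critic(spec, candidate):
    """The adversary: scores a candidate and says "you're not doing
    well enough, and here's why"."""
    # Toy physics stand-in: distance rolled is proportional to applied force.
    achieved = candidate["force"] * spec["roll_per_newton"]
    error = abs(achieved - spec["target_distance"])
    feedback = f"off by {error:.3f} m (rolled {achieved:.3f}, wanted {spec['target_distance']})"
    return error, feedback

def solver(spec, max_rounds=10_000, tolerance=1e-3):
    """The first AI: "solves" the use case by randomly trying stuff,
    keeping whatever the critic complains about least."""
    best, best_error = None, float("inf")
    for _ in range(max_rounds):
        candidate = {"force": random.uniform(0.0, 100.0)}
        error, _feedback = critic(spec, candidate)  # a real system would act on the feedback
        if error < best_error:
            best, best_error = candidate, error
        if best_error < tolerance:
            break
    return best

def solve_use_case(spec):
    """Check the index first; only run the adversarial loop on a cache miss."""
    key = tuple(sorted(spec.items()))
    if key not in SOLVED_INDEX:
        SOLVED_INDEX[key] = solver(spec)  # index it once solved
    return SOLVED_INDEX[key]

spec = {"target_distance": 12.0, "roll_per_newton": 0.5}
print(solve_use_case(spec))  # runs the random-search loop
print(solve_use_case(spec))  # instant: served straight from the index
```

Note the random solver never actually uses the critic's "here's why"; closing that loop is exactly what a real adversarial setup (GAN-style training) would have to do.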
Secondarily, you are the first person to give me a solid reason as to why the current paradigm is unworkable. Despite my mediocre recall I have spent most of my life studying AI well before all this LLM stuff, so I like to think I was at least well educated on the topic at one point. I appreciate your response. I am somewhat curious about what architecture changes need to be made to allow for actual problem solving. The entire point of a neural network is to replicate the way we think, so why do current AIs only seem to be good at pattern recognition and not even the most basic problem solving? Perhaps the architecture is fine, but we simply need to train up generational AIs that specifically focus on problem solving instead of pattern recognition?
> Secondarily, you are the first person to give me a solid reason as to why the current paradigm is unworkable. Despite my mediocre recall I have spent most of my life studying AI well before all this LLM stuff, so I like to think I was at least well educated on the topic at one point.
Unfortunately, it seems that your education was missing the foundations of deep learning. PAC learning is the current meta-framework; it's been around for about four decades, and at its core is the idea that even the best learners are not guaranteed to learn the solution to a hard problem.
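To put one formula on that: in the realizable PAC setting, a standard bound for a finite hypothesis class $\mathcal{H}$ says that any learner outputting a hypothesis consistent with its training set achieves error at most $\varepsilon$, with probability at least $1-\delta$, once the sample size satisfies

$$
m \;\ge\; \frac{1}{\varepsilon}\left(\ln\lvert\mathcal{H}\rvert + \ln\frac{1}{\delta}\right).
$$

Even the name is a hedge: "Probably ($\delta$) Approximately ($\varepsilon$) Correct." And that bound only covers sample complexity; for plenty of hypothesis classes, actually finding a consistent hypothesis is computationally intractable, which is the formal sense in which even the best learner isn't guaranteed to crack a hard problem.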
> I am somewhat curious about what architecture changes need to be made to allow for actual problem solving.
First, convince us that humans are actual problem solvers. The question is begged; we want computers to be intelligent but we didn’t check whether humans were intelligent before deciding that we would learn intelligence from human-generated data.
> Perhaps the architecture is fine, but we simply need to train up generational AIs that specifically focus on problem solving instead of pattern recognition?
I mean, the architecture clearly isn’t fine. We’re getting very clever results, but we are not seeing even basic reasoning.
It is entirely possible that AGI can be achieved within our lifetime. But there is substantial evidence that our current approach is a complete and total dead end.
Not to say that we won't use pieces of today's solution. Of course we will. But something unknown, yet clearly important and necessary for AGI, appears to be completely missing right now.
Okay, so the rest of this is just theorycrafting based on logical reasoning, but I'd like to hear your take. A quick Google shows that we have successfully mapped the neurons in one cubic millimeter of mouse brain, and it had about 200,000 cells (neural nodes). That's a lot of neural nodes to emulate, let alone the connections. It would seem to me that it's far easier to customize our hardware. MOSFETs don't strike me as up to the task, so it would seem to me that the future of AI lies in growing actual neurons and training them. You would achieve a much higher neural density that way, and the work is already being done to make that tech feasible.

Basically, do you think it's a hardware issue?
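A quick back-of-envelope on those numbers (the 200,000 cells/mm³ figure is from the mapping result above; the ~500 mm³ whole-mouse-brain volume and ~1,000 synapses per cell are my own ballpark assumptions, not measurements from that study):

```python
# Order-of-magnitude scaling of the cubic-millimeter mapping result.
cells_per_mm3 = 200_000      # from the mapped cubic millimeter
mouse_brain_mm3 = 500        # assumed whole-mouse-brain volume
synapses_per_cell = 1_000    # assumed average connectivity

cells = cells_per_mm3 * mouse_brain_mm3   # ~1e8 nodes
connections = cells * synapses_per_cell   # ~1e11 edges
print(f"~{cells:.1e} cells to emulate, ~{connections:.1e} connections")
```

So emulating even a mouse at full fidelity means something like a hundred million nodes and a hundred billion connections, before you even touch the dynamics of each cell. That's why I don't think the hardware question is crazy.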
> MOSFETs don't strike me as up to the task, so it would seem to me that the future of AI lies in growing actual neurons and training them. You would achieve a much higher neural density that way, and the work is already being done to make that tech feasible.
I think you’ve got it exactly.
We either need to achieve an unprecedented density (possibly through some novel computation medium), or we need to find a few more incredibly clever computational shortcuts.