17 comments
2 u/TelescopiumHerscheli 22 Jun 2016 23:57
A central problem with this approach is that, at least in the form presented here, it doesn't guarantee accuracy. There will always be a probability (presumably low, but nevertheless non-zero) of error. Whilst this may be useful in some areas, it's not obvious to me that it's always a desirable approach. How would you feel about a system that mostly got your bank balance correct, but occasionally got it wrong? And what about a system that, for example, monitors some mission-critical component? Would you feel happy with a system that "mostly" got things right when it comes to your heart monitor, or "mostly" spotted problematic objects in Spaceguard?
I don't deny that this is an interesting way of building on ideas in the cellular automaton tradition, but I'm not yet convinced that it can be exploited in a real-world problem.
0 u/psioniq [OP] 23 Jun 2016 00:26
Yes, accuracy vs. robustness. But I think it comes down to what you want to build.
The bank account, for example, could get "enough" accuracy if the chain were long enough. Imagine the accuracy of his Demon Horde Sort if it were extended thousands of cells in each direction. Glitches would occur so rarely that the robustness gained could outweigh what was lost in precision. Yes, there will be errors, but they won't crash the entire system the way we are used to today.
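Something like this toy simulation shows the trade (my own Python sketch, not Ackley's actual MFM code): a stochastic sorter where each swap decision has a small chance of being flipped by a fault. Throwing more redundant passes at it drives the residual error down to a low, non-zero floor - errors persist, but nothing crashes.

```python
import random

def noisy_sort_pass(cells, fault_rate):
    """One asynchronous pass: each event grabs a random adjacent pair
    and swaps it into order -- unless a transient fault flips the decision."""
    for _ in range(len(cells)):
        i = random.randrange(len(cells) - 1)
        in_order = cells[i] <= cells[i + 1]
        if random.random() < fault_rate:  # simulated hardware glitch
            in_order = not in_order
        if not in_order:
            cells[i], cells[i + 1] = cells[i + 1], cells[i]

def residual_error(cells):
    """Fraction of adjacent pairs still out of order."""
    bad = sum(cells[i] > cells[i + 1] for i in range(len(cells) - 1))
    return bad / (len(cells) - 1)

random.seed(1)
for passes in (10, 100, 1000):
    cells = random.sample(range(256), 64)
    for _ in range(passes):
        noisy_sort_pass(cells, fault_rate=0.01)
    print(f"{passes:5d} passes -> residual error {residual_error(cells):.3f}")
```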
1 u/TelescopiumHerscheli 23 Jun 2016 01:01
Possibly. I think it would be very difficult to persuade senior management in many commercial firms to adopt this approach without a paradigm shift in the way corporations make their decisions. Imagine the conversation in the board room:
"The CTO advocates that we switch to this new system that is 99.9999% reliable. There will never be any hardware failures, but about 0.000001 of our bank balance calculations will be wrong."
"Sounds good. What's the reliability of the existing system?"
"Well, we aim for 100% accuracy in our bank balance calculations, but we have occasional hardware problems that prevent us from reporting any bank balances at all. This happens about once in 100,000 transactions."
"So you're offering us a choice between a system that goes wrong one time in a million, or a system that doesn't tell people anything at all one time in one hundred thousand?"
"Yes."
"We'll stick with the existing system. Failing to deliver any information is a better outcome than delivering incorrect information."
And this is before the legal officer steps in with questions about liability!
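The board's reasoning here is really just an expected-cost calculation. A back-of-the-envelope sketch (the dollar figures are invented purely for illustration):

```python
# Per one million balance queries, with made-up costs per failure mode.
QUERIES = 1_000_000
wrong_rate, outage_rate = 1e-6, 1e-5    # robust vs. conventional system

wrong_answers   = QUERIES * wrong_rate    # 1 silently incorrect balance
missing_answers = QUERIES * outage_rate   # 10 balances not reported at all

# A wrong answer can cost far more than a missing one (refunds,
# reputation, regulators), so the conventional system can still win:
COST_WRONG, COST_MISSING = 10_000.0, 50.0  # hypothetical dollars
print("robust system:      ", wrong_answers * COST_WRONG)      # 10000.0
print("conventional system:", missing_answers * COST_MISSING)  # 500.0
```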
0 u/psioniq [OP] 23 Jun 2016 21:21
Absolutely. The 'suits' wouldn't approve anything like that.
But let's think ahead a little: what if the margin of error were more on the order of 1:10^100? What if they no longer had to care about backups and other redundancies?
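For a sense of scale (rough arithmetic, nothing more):

```python
# How often would a 1 : 10**100 error rate actually bite? Even at a
# billion operations per second, around the clock:
OPS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3.15e7
years_between_errors = 1e100 / (OPS_PER_SECOND * SECONDS_PER_YEAR)
print(f"expected years between errors: {years_between_errors:.1e}")  # ~3.2e83
```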
Then it becomes a question of funding. If you spend $100M per year keeping techs/programmers on standby and buying replacement hardware (yes, hardware would need to be replaced in an MFM system as well - but more cheaply, I think, because of the modular design), you'd probably be happy to hear that it could be cut in half by another system (pure speculation here).
I think the concept has merit - but it's a very poor fit for our current x86 architecture.
0 u/psioniq [OP] 23 Jun 2016 21:31
Just to elaborate on where I think it has merit.
Nanotech - The ability to recreate entire systems after damage will come in very handy for these little fellas - or just for general communication between them (think about how hard threading is today - now write a program that utilizes a million threads efficiently).
Aerospace - One of the issues today is slow communication, or no communication at all (because some circuit got 'too close to the sun' and got fried). With an MFM system, the satellite or whatever could simply bypass faulty hardware on the fly, and regenerate the routines that are now missing.
But it's a very different way of programming. Instead of 'paths' we must create 'rules'.
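As a toy illustration of what 'rules' instead of 'paths' might look like (my own sketch, not MFM's actual element language): every cell carries a copy of a routine, and the single local rule is "if a neighbor is blank, regenerate it from my copy".

```python
import random

N = 16
ROUTINE = "handler-v1"
grid = [ROUTINE] * N          # every cell carries a copy of the routine

def radiation_hit(grid):
    grid[random.randrange(N)] = None   # a random cell gets fried

def repair_step(grid):
    # The one local rule: blank cells are regenerated from a live neighbor.
    for i in range(N):
        if grid[i] is None:
            for j in ((i - 1) % N, (i + 1) % N):  # ring topology
                if grid[j] is not None:
                    grid[i] = grid[j]
                    break

random.seed(0)
for _ in range(5):
    radiation_hit(grid)

steps = 0
while grid.count(ROUTINE) < N:
    repair_step(grid)
    steps += 1
print(f"fully regenerated after {steps} repair step(s)")
```

Nobody routed around the damage; the damage just got overwritten by the rule. That's the appeal for satellites and nanotech alike.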
2 u/onegin 23 Jun 2016 01:26
Perhaps in domains where verifying that an answer is correct is easy, but finding the answer is hard. Perhaps at the output nodes you verify that a result is correct, and if not, feed the data back into the input layer.
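Roughly this pattern, sketched in Python (the unreliable sorter is a stand-in I made up, just to have something to check - checking sortedness is O(n) even when producing the answer is much costlier):

```python
import random

def unreliable_sort(xs, fault_rate=0.01):
    """Stand-in for a robust-but-approximate sorter: occasionally
    perturbs its output to simulate a rare fault."""
    out = sorted(xs)
    if len(out) > 1 and random.random() < fault_rate:
        i = random.randrange(len(out) - 1)
        out[i], out[i + 1] = out[i + 1], out[i]
    return out

def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

def sort_with_output_check(xs):
    """Verify at the output node; on failure, feed the data back in."""
    while True:
        out = unreliable_sort(xs)
        if is_sorted(out):
            return out

print(sort_with_output_check([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```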
My confusion with this idea is whether it is easy or even possible to express interesting problems, and whether it can solve problems efficiently. This example strikes me as an over-engineered bubble-sort. I can't tell if he is just trying to provide inspiration with fun toy examples, or if this is a sketch of a more concrete idea that would make more sense but would be too technical for the type of video he's trying to create.
The other thing is, if this is talking about distributed computing, it seems like it might be leaving a lot out. He doesn't talk about how the grid maps to actual compute resources. Or how nodes discover their neighbors and so on.
I guess I could take a look at the paper ...
1 u/Wonko_the_Sane 22 Jun 2016 18:01
hah, so cool. some of the other videos by dave ackley are amazing. this one is beautiful...
1 u/psioniq [OP] 22 Jun 2016 18:07
Yes, there is some amazing stuff there. Watched it all twice. Not only an interesting concept, but a cool dude too.
0 u/psioniq [OP] 22 Jun 2016 16:35
This is really fascinating - do yourself a favor and code a small demo of this. :)
0 u/effusive_ermine 22 Jun 2016 19:48
We'll all be dead before anything practical can be done with this.
1 u/psioniq [OP] 22 Jun 2016 20:18
Yes, it'll require a new architecture (which he is working on too) - but look how far we have come since the Z1 from 1936.
0 u/effusive_ermine 22 Jun 2016 22:32
Okay. Have fun with that, fellas. It does look pretty neat.
0 u/psioniq [OP] 23 Jun 2016 02:48
Link to the project files can be found here.
0 u/blueingreen 21 Jul 2016 22:13
People always talk about new architectures and stuff, but they never take off because they are very difficult to code for. He should come up with a better programming language for expressing this kind of machine.