Comment on: Do you expect computers to move to a 128-bit architecture during your lifetime?
0 04 Dec 2016 16:48 u/CanIHazPhD in v/programming
Comment on: Do you expect computers to move to a 128-bit architecture during your lifetime?
Thanks for the read, looks interesting!
Comment on: Do you expect computers to move to a 128-bit architecture during your lifetime?
The AI angle didn't even cross my mind, thanks for the perspective!
Comment on: Do you expect computers to move to a 128-bit architecture during your lifetime?
> The wider an operand becomes, the more time it takes to do the computation
I was wondering whether this would lead to a loss of performance. I don't think the switch from 32-bit to 64-bit led to a significant performance hit, but I'm not so sure about that.
The part about it not helping many people is quite true, though.
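Not the same thing as changing the native register width, but as a rough illustration of the "wider operands cost more" point, here's a minimal MATLAB sketch (the sizes are arbitrary and the timings are illustrative only; results will vary by machine) timing the same elementwise multiply on 4-byte singles versus 8-byte doubles:
% Rough sketch, not a rigorous benchmark: the double version moves
% twice as many bytes per element, so it is usually slower.
n   = 1e7;
a32 = rand(n,1,'single'); b32 = rand(n,1,'single');
a64 = rand(n,1);          b64 = rand(n,1);
t32 = timeit(@() a32 .* b32);   % median time, single-precision multiply
t64 = timeit(@() a64 .* b64);   % median time, double-precision multiply
fprintf('single: %.2f ms, double: %.2f ms\n', 1e3*t32, 1e3*t64);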
Comment on: Do you expect computers to move to a 128-bit architecture during your lifetime?
This was part of my reasoning: I figured the switch happened due to the need to address more storage/memory, and that the extra precision was just a bonus.
Do you expect computers to move to a 128-bit architecture during your lifetime?
1 0 comments 29 Nov 2016 23:40 u/CanIHazPhD (self.programming) in v/programming
Comment on: With just how much numerical detail MATLAB hides in things like its optimization functions, I'm amazed there's no simple way to numerically differentiate a function at a single point.
I've been there. Anyway, you should code something like this for future use. You could try a program that calculates the derivative with several values of h and then averages them for better accuracy (and a lot less speed). Just be careful about very small values of h: x+h and x-h might not be machine representable as values distinct from x, which gives you a large error when dividing by (2*h).
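For what it's worth, here's a rough MATLAB sketch of that idea (the function handle, evaluation point, and step sizes are placeholders): it averages central-difference estimates over several values of h and skips any h so small that x+h rounds back to x.
f  = @(x) exp(x);        % placeholder: any smooth function handle
x  = 1;                  % point where we want f'(x)
hs = 10.^(-3:-1:-7);     % several step sizes to average over
ds = [];
for h = hs
    if (x + h) == x || (x - h) == x
        continue         % h too small: x+h is not distinct from x in floating point
    end
    ds(end+1) = (f(x+h) - f(x-h)) / (2*h);  % central-difference estimate
end
d = mean(ds);            % average of the estimates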
Comment on: With just how much numerical detail MATLAB hides in things like its optimization functions, I'm amazed there's no simple way to numerically differentiate a function at a single point.
Why don't you do it manually? Something like:
f = @(x) sin(x); % example: define the function as a handle first
x = 0;           % or any other point where the function is well behaved
h = 1e-5;        % experiment with this parameter
d = (f(x+h) - f(x-h)) / (2*h);  % central-difference estimate of f'(x)
You could also use the forward or backward difference, but those converge at a rate of O(h), while the central difference converges at O(h^2).
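You can check those rates numerically with a quick MATLAB sketch (the test function is arbitrary); each time h shrinks by 10x, the forward-difference error should drop by roughly 10x and the central-difference error by roughly 100x:
f  = @(x) sin(x);  fp = @(x) cos(x);  % test function and its exact derivative
x  = 0.5;
for h = 10.^(-1:-1:-5)
    fwd = (f(x+h) - f(x)) / h;        % forward difference, error ~ O(h)
    cen = (f(x+h) - f(x-h)) / (2*h);  % central difference, error ~ O(h^2)
    fprintf('h=%.0e  fwd err=%.2e  cen err=%.2e\n', ...
        h, abs(fwd - fp(x)), abs(cen - fp(x)));
end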
Comment on: Do you expect computers to move to a 128-bit architecture during your lifetime?
This is interesting. Never heard of it before! Do you think something like this could be implemented to go from 64-bit to 128-bit (or some other number)?