Whoops!
Posted: 06 Aug 2012, 10:04
by SleeperService
I think this shows just how fragile and self-defeating our financial system is:
http://www.bbc.co.uk/news/business-19145584
In view of recent glitches elsewhere is it time to ask if there's a systemic failure waiting in the wings?
Posted: 06 Aug 2012, 10:24
by adam2
Yes.
It would be ironic in the extreme if the final crash was caused by an IT failure rather than by war, terrorism, sovereign default, or natural disaster.
IT failures that have serious consequences for large and well-known firms do seem to be increasing.
Posted: 06 Aug 2012, 11:19
by PS_RalphW
I would hate to be the programmer that they pin this one on, but I would bet good money (or even peanuts) that they outsourced most of their systems development and kicked out most of their IT staff years ago.
Posted: 06 Aug 2012, 14:17
by kenneal - lagger
Just goes to show that we humans can make cockups but computers can make mega-cockups!!
Posted: 06 Aug 2012, 15:17
by JohnB
kenneal - lagger wrote:Just goes to show that we humans can make cockups but computers can make mega-cockups!!
But only because humans told them to do it, by giving them the wrong instructions, and/or giving them inappropriate tasks!
Posted: 06 Aug 2012, 19:48
by rue_d_etropal
Not 'ironic', very likely.
Many laughed at the 'Y2K' bug, especially as there were no visible problems, but there are other 'dates' which could still cause trouble.
Add to that the people who think that just because something is designed to do a job, nothing will go wrong. Computer programming is not just about logic but also about physics. Think how often your TV set-top/satellite box has frozen and the only way to get it working again is to switch it off and on; similarly with your broadband box.
How many computer chips are specified to do something but, when it comes to reality, don't work that well? Not enough slack is built into the system.
The above examples have affected relatively unimportant systems, but next time it could be something more important.
I also wonder how many of the current generation of IT workers can dig deep into code and stored data to see what the problem actually is.
It would be reassuring if some kind of contingency plan were in place, even if it were just building potential teams who could try to fix the big one when it hits. Most of us have not reached retirement age (although as far as most IT companies are concerned, our years of real experience aren't worth much), so we could be wheeled out. Not too certain I could read hex code as well these days, though.
Posted: 06 Aug 2012, 20:07
by Little John
rue_d_etropal wrote:Not 'ironic', very likely.
Many laughed at the 'Y2K' bug, especially as there were no visible problems, but there are other 'dates' which could still cause trouble.
Add to that the people who think that just because something is designed to do a job, nothing will go wrong. Computer programming is not just about logic but also about physics. Think how often your TV set-top/satellite box has frozen and the only way to get it working again is to switch it off and on; similarly with your broadband box.
How many computer chips are specified to do something but, when it comes to reality, don't work that well? Not enough slack is built into the system.
The above examples have affected relatively unimportant systems, but next time it could be something more important.
I also wonder how many of the current generation of IT workers can dig deep into code and stored data to see what the problem actually is.
It would be reassuring if some kind of contingency plan were in place, even if it were just building potential teams who could try to fix the big one when it hits. Most of us have not reached retirement age (although as far as most IT companies are concerned, our years of real experience aren't worth much), so we could be wheeled out. Not too certain I could read hex code as well these days, though.
The other thing to point out is that the internal logic of a computing system is no indicator of the validity of that system's output; it only guarantees the reliability of the output. Reliability and validity are entirely different things. The former requires consistent processing of inputs inside the computing system; the latter requires that the inputs themselves are also valid.
Thus, computing systems can fail on either or both counts. If the internal logic is faulty due to bad programming, the outputs will be unreliable as well as invalid; if the inputs are invalid, the outputs will be invalid even though they may well be reliable.
In the early days, when computer systems were relatively simple, most problems related to the validity of inputs. As these systems have grown ever more complex, however, problems of reliability have also become more difficult to identify and remedy.
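The reliability/validity distinction can be made concrete in a few lines. This is a hypothetical sketch (the function and the price-feed data are invented for illustration, not from any real system): the averaging logic is perfectly reliable, i.e. deterministic and consistent, yet a single invalid input makes its output invalid.

```python
def average_price(ticks):
    """Reliably computes the mean of its inputs: the same ticks in
    always give the same number out. That is reliability."""
    return sum(ticks) / len(ticks)

good_feed = [101.2, 101.3, 101.1]
bad_feed = [101.2, 101.3, -99999.0]  # one corrupt tick from upstream

print(average_price(good_feed))  # ≈ 101.2, a valid output
print(average_price(bad_feed))   # ≈ -33265.5, reliable but invalid:
                                 # garbage in, garbage out
```

The bug here is not in `average_price` at all, which is exactly the point: no amount of testing the internal logic would catch it, because the failure is one of input validity.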
Posted: 06 Aug 2012, 20:12
by DominicJ
THERE WAS A Y2K CRISIS
A program I wrote at college suffered from it, amongst many other flaws
Posted: 06 Aug 2012, 21:20
by JavaScriptDonkey
There would have been a Y2K crisis if lots of people hadn't worked hard to update systems (or just roll them back) before Y2K.
The world was never going to end but lots of little dumb systems were getting ready to display the wrong year.
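The kind of "little dumb system" JavaScriptDonkey describes can be sketched in a few lines. This is a hypothetical illustration, not code from any real system: many pre-Y2K programs stored years as two digits and hard-coded the century, and one common fix was a sliding "pivot" window.

```python
def parse_year_naive(yy):
    """Pre-Y2K style: the century '19' is simply hard-coded,
    so the two-digit year 00 comes out as 1900."""
    return 1900 + yy

def parse_year_windowed(yy, pivot=70):
    """A common Y2K remediation: years below the pivot are taken
    as 20xx, years at or above it as 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(parse_year_naive(99))     # 1999 - fine
print(parse_year_naive(0))      # 1900 - the Y2K bug: should be 2000
print(parse_year_windowed(0))   # 2000 - the windowed fix gets it right
print(parse_year_windowed(85))  # 1985 - old dates still parse correctly
```

Note that the windowed fix only defers the problem: with a pivot of 70, dates from 2070 onwards will misparse, which is exactly why "other dates could still cause problems".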
Posted: 06 Aug 2012, 21:30
by RenewableCandy
JavaScriptDonkey wrote:There would have been a Y2K crisis if lots of people hadn't worked hard to update systems (or just roll them back) before Y2K.
Quite.
Meanwhile, d'you think the Financial System would start to work again if we turned it off and then on again?
Posted: 06 Aug 2012, 21:54
by PS_RalphW
The military and aviation industries spend billions developing processors that are mathematically proven to have a logical design meeting their specifications (and the physical reliability to match MTBF specs). The code is written in languages like Ada, which can be mathematically proven to meet its specification, and the Ada compilers are themselves written so that they can be proven to compile code accurately.
Even then, the Turing halting problem means it is impossible in general to calculate how long an algorithm will take to complete (or whether it will complete at all), except by running it with every single possible set of input data. This is a problem if the system needs to respond to events in the real world in finite time. Similarly, any algorithm that dynamically allocates memory in response to real-time input data could, in theory, run out of memory and crash.
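A toy illustration of the input-dependent termination problem described above (hypothetical code, not from the thread): the Collatz iteration is a famous case where no known proof bounds the loop for all inputs, so a real-time system can only bolt on a watchdog limit rather than trust termination analysis.

```python
def collatz_steps(n, max_steps=10000):
    """Count iterations of the Collatz map (n -> n/2 if even,
    3n+1 if odd) until n reaches 1. Nobody has proved this loop
    terminates for every n, so a watchdog limit stands in for the
    timing guarantee that static analysis cannot provide."""
    steps = 0
    while n != 1:
        if steps >= max_steps:
            return None  # give up: completion not guaranteed in time
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps for a seemingly small input
print(collatz_steps(6))   # 8 steps
```

The point is that the run time bears no obvious relation to the size of the input (27 takes far longer than much larger numbers), which is exactly what defeats worst-case timing analysis.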
Then, in the real world, even proven, reliable, and well-designed systems like autopilots have a nasty habit of crashing just when you need them most, like in the middle of a thunderstorm, in the middle of the Atlantic, in the middle of the night, a thousand miles from land.
Posted: 06 Aug 2012, 21:56
by JohnB
DominicJ wrote:THERE WAS A Y2K CRISIS
A program I wrote at college suffered from it, amongst many other flaws
Some of the software I wrote for my business needed changing, but there was no crisis. The crisis would have happened if I hadn't fixed it.
Posted: 07 Aug 2012, 01:05
by madibe
The rescuers include financial firms Blackstone and TD Ameritrade.
It would be awful to think that corporate sabotage occurred, wouldn't it?