Can a virus destroy the BIOS of a modern computer?
In the late 1990s, a computer virus known as CIH began infecting some computers. Its payload, when triggered, overwrote system information and destroyed the computer's BIOS, essentially bricking whatever computer it infected. Could a virus that affects modern operating systems (like Windows 10) destroy the BIOS of a modern computer and essentially brick it the same way, or is it now impossible for a virus to gain access to a modern computer's BIOS?
malware virus operating-systems bios
asked Apr 2 at 7:27 by user73910 (edited Apr 2 at 7:37)
Yes, but from an attacker's perspective it is a waste of resources... More info on a UEFI rootkit, as an example, in the paper below: welivesecurity.com/wp-content/uploads/2018/09/ESET-LoJax.pdf
– Hugo
Apr 4 at 12:15
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 4 at 12:42
Some (or most?) desktop motherboards have a ROM used to recover the BIOS from some form of media (in the old days, floppy disks; these days, USB sticks, maybe CD-ROM). The ROM can't be modified; however, recovery usually requires opening the case and moving a jumper to boot into BIOS recovery mode. I don't know how laptops deal with this.
– rcgldr
Apr 4 at 16:11
Related: security.stackexchange.com/q/13105/165253
– forest
Apr 5 at 5:18
9 Answers
Modern computers don't have a BIOS; they have UEFI firmware. Updating the UEFI firmware from the running operating system is a standard procedure, so any malware which manages to run on the operating system with sufficient privileges could attempt to do the same. However, most UEFIs will not accept an update which isn't digitally signed by the manufacturer, which means it should not be possible to overwrite them with arbitrary code.
This, however, assumes that:
- the mainboard manufacturers manage to keep their private keys secret
- the UEFI doesn't have any unintended security vulnerabilities which allow overwriting it with arbitrary code or can otherwise be exploited to cause damage.
And those two assumptions do not necessarily hold.
Regarding leaked keys: if a UEFI signing key were to become known to the general public, then you can assume that there would be quite a lot of media reporting and hysterical patching going on. If you follow some IT news, you would likely see a lot of alarmist "If you have a [brand] mainboard UPDATE YOUR UEFI NOW!!!1111oneone" headlines. But another possibility is signing keys secretly leaked to state actors. So if your work might be interesting for industrial espionage, then this might also be a credible threat for you.
Regarding bugs: UEFI firmware keeps gaining more and more functionality, which means more and more opportunities for hidden bugs. It also lacks most of the internal security features you have after you have booted a "real" operating system.
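To make the signature requirement concrete, here is a minimal sketch of the idea: an update tool refuses to flash an image unless a detached signature over it verifies against the vendor's public key. This is not the actual UEFI capsule-update path (which performs the check inside the firmware itself, using Authenticode-style signatures); the file names and the plain PEM public key are illustrative assumptions, and the check uses OpenSSL's EVP API.

    /* Sketch only: verify a detached signature over a firmware image before
     * flashing. Hypothetical file names; real UEFI updates do this check in
     * the firmware, not in a user-space tool. Build with -lcrypto. */
    #include <openssl/evp.h>
    #include <openssl/pem.h>
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned char *read_file(const char *path, size_t *len) {
        FILE *f = fopen(path, "rb");
        if (!f) { perror(path); exit(1); }
        fseek(f, 0, SEEK_END);
        *len = (size_t)ftell(f);
        rewind(f);
        unsigned char *buf = malloc(*len);
        if (fread(buf, 1, *len, f) != *len) { perror("fread"); exit(1); }
        fclose(f);
        return buf;
    }

    int main(void) {
        size_t img_len, sig_len;
        unsigned char *image = read_file("firmware.bin", &img_len);  /* capsule image */
        unsigned char *sig   = read_file("firmware.sig", &sig_len);  /* detached signature */

        FILE *kf = fopen("vendor_pub.pem", "rb");                    /* vendor's public key */
        if (!kf) { perror("vendor_pub.pem"); return 1; }
        EVP_PKEY *pkey = PEM_read_PUBKEY(kf, NULL, NULL, NULL);
        fclose(kf);

        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        int ok = EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1
              && EVP_DigestVerifyUpdate(ctx, image, img_len) == 1
              && EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1;

        /* Only a correctly signed image would ever be handed to the flash routine. */
        puts(ok ? "signature OK, would flash" : "bad signature, refusing to flash");

        EVP_MD_CTX_free(ctx);
        EVP_PKEY_free(pkey);
        free(image); free(sig);
        return ok ? 0 : 1;
    }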
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 5 at 8:03
Yes, it is definitely possible.
Nowadays, with UEFI becoming widespread, it is even more of a concern: UEFI has a much larger attack surface than a traditional BIOS, and a (potential) flaw in UEFI could be leveraged to gain access to a machine without any kind of physical access (as demonstrated by researchers from Eclypsium at Black Hat last year).
Practically speaking, a virus is software, so it can do anything that any other software can do.
So the simple way to answer this question, and all others of the class "Can viruses do X?", is to ask "Does software currently do X?"
Such questions might include "can a virus walk my dog?" (not without a dog-walking robot); "Can a virus get me pizza?" (yes: this is regrettably not the main focus of most virus authors, however).
Are BIOSes (UEFI) currently updated using software? The answer is, yes they are. Mine updated last night, when I rebooted.
And so the answer is yes.
By the same logic, viruses can also cause (and historically have caused) physical damage to your CPU, hard drives, and printers.
Home automation systems and driverless vehicles are also possible targets for physical damage, but I know of no viruses which have done so.
I wouldn't mind much if my personal information was used by malware developers to order me free pizza and nothing else. (+1 for useful reasoning)
– Marc.2377
Apr 2 at 23:23
@Marc.2377, I would not mind much if your personal information was used to order me free pizza… :-)
– sleblanc
Apr 3 at 3:54
Modern viruses will have a very hard time causing physical damage. At most, they could wear down hardware a bit by running the CPU really hot, which shortens useful lifetime, but it's not common for it to be able to cause damage. In the past that wasn't the case though. See "the poke of death".
– forest
Apr 3 at 7:33
@forest Aren't the fans and cooling systems software controlled these days? I'm not sure, but I bet you could somehow foul the CPU or GPU fan from software. Russia destroyed generators remotely by toggling them on and off at a resonant frequency--I bet there are similar tricks that could kill your monitor pretty quickly. Platter hard drives can definitely be trashed by spinning them up and down repeatedly, solid state drives are vulnerable to repeated read/write cycles. I bet there is a lot a motivated hacker could do..
– Bill K
Apr 3 at 17:55
I think we'd need to define scope of "cause physical damage" before we decided if it was possible/plausible. If you constrain the definition to literally damaging the computer running the code, that's pretty narrow and I think @forest is right. If you include physical damage in a more general sense, it's much easier to imagine scenarios where an infected computer that's controlling something else (power plant, traffic lights, mass transit system, water treatment plant, etc) could easily cause major physical damage.
– dwizum
Apr 4 at 16:00
Yes, it is definitely possible.
Here is an example of a malware OS update fraudulently signed with the manufacturer's private key:
https://www.theregister.co.uk/2019/03/25/asus_software_update_utility_backdoor/
According to Kaspersky Lab, about a million Asus laptops were infected by ShadowHammer, with an update that appeared to be correctly signed. It's not clear whether that altered the firmware, but it certainly could have done.
Your question hints at a deeper subject: the rings and permissions of code on an operating system. On MS-DOS, code could do whatever it wanted: if it wanted to write all 0x00s to a hard drive, or send strange output to a piece of hardware, it could, and nothing stopped it. On a modern OS there is a concept of rings (enforced by the CPU). The kernel runs in ring 0 and can do whatever it wants. The user's code, on the other hand, cannot: it runs in what is called ring 3 and is given its own little piece of memory, inside of which it can do whatever it wants, but it cannot directly talk to hardware. If the user's code tries to talk to hardware, the kernel immediately kills the program. This means it is highly unlikely that a regular virus can kill hardware, because it cannot talk to the hardware directly.
If the kernel is hacked, then the game is basically over. The kernel can do whatever it wants, and a whole host of bad things can happen, such as overclocking the CPU to the point where the hardware is unstable, wiping the hard drives (filling them with zeros, for example), or pretty much any other plausible attack.
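As a concrete illustration of this (and of the comment thread below): on Linux/x86-64, executing a privileged instruction such as cli from ring 3 raises a general protection fault, which the kernel delivers to the process as SIGSEGV, and the default action is to kill the process. This small sketch only installs a handler so it can print a message before exiting; resuming after such a fault is not generally meaningful.

    /* Sketch (Linux/x86-64 assumed): ring-3 code attempting a privileged
     * instruction. "cli" (disable interrupts) requires ring 0, so the CPU
     * raises #GP and the kernel delivers SIGSEGV to this process. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void on_segv(int sig) {
        (void)sig;
        /* Print via write() (async-signal-safe) and exit instead of returning,
         * since returning would re-execute the faulting instruction. */
        const char msg[] = "caught SIGSEGV: privileged instruction refused\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(1);
    }

    int main(void) {
        signal(SIGSEGV, on_segv);   /* without this, the default action kills the process */
        puts("attempting to execute 'cli' from user space...");
        __asm__ volatile("cli");    /* privileged: faults when executed in ring 3 */
        puts("this line is never reached");
        return 0;
    }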
"If the user's code tries to talk to hardware then the kernel immediately kills the program" - Really? Can you provide a citation for that? I thought the protected instruction would simply fail and it's up to the program to deal with that reasonably or crash.
– Marc.2377
Apr 2 at 23:21
@Marc.2377 It is correct. If the user's code attempts to execute an instruction in CPL3 that requires CPL0 privileges, it will throw #GP(0) (general protection fault, or GPF). This causes the code to jump into the kernel to see what signal handler was set up for that event. By default, the kernel will kill the process, though it's technically possible for the process to set up a signal handler for SIGSEGV, in which case the kernel resumes execution of the process at the location of the signal handler. It's generally not a good idea though because a process is considered to be in an...
– forest
Apr 3 at 7:20
...undefined state according to POSIX if execution resumes after a SIGSEGV has been raised that didn't come from raise(). It will resume execution at the failed instruction which will just run again and cause the process to lock up if the signal is ignored. So it can be up to the program to deal with it, if it sets up a signal handler for SIGSEGV, but there's pretty much never any situation where that would be done (though I think the Dolphin emulator catches segfaults for some sort of hacky optimization so it doesn't have to emulate some weird paging behavior and can rely on the MMU).
– forest
Apr 3 at 7:20
See this for a (rare) example of when it is up to the program. Or just read PoC||GTFO 6:3.
– forest
Apr 3 at 7:26
@forest Thanks a lot.
– Marc.2377
Apr 3 at 23:52
Potentially. It would be hard to do, however, as it would more than likely have to masquerade as a legitimate BIOS update somewhere down the line. The method will vary depending on your motherboard, but chances are it would have to involve the leaking of private or hardware keys or other secrets.
Yes. It's hardware-specific, but here is one case of a user accidentally breaking their motherboard firmware from the OS level: https://github.com/systemd/systemd/issues/2402
A bug in the firmware of an MSI laptop meant that clearing the EFI variables left the laptop unusable. Because these variables were exposed to the OS and mounted as files, deleting every file from the OS level triggered the issue, and a virus could exploit it by specifically targeting those variables.
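For context on the mechanism: on Linux, UEFI variables are exposed through efivarfs, conventionally mounted at /sys/firmware/efi/efivars, where each variable appears as a file named VariableName-VendorGUID. A deliberately read-only sketch that just enumerates them (the mount point is the usual one and may differ or be absent on non-UEFI systems):

    /* Read-only sketch: list the UEFI variables that efivarfs exposes on Linux.
     * Deleting or rewriting these files is exactly what the linked systemd
     * issue was about, so this program only enumerates them. */
    #include <dirent.h>
    #include <stdio.h>

    int main(void) {
        const char *path = "/sys/firmware/efi/efivars";  /* conventional mount point */
        DIR *d = opendir(path);
        if (!d) {
            perror(path);  /* likely not a UEFI boot, or efivarfs not mounted */
            return 1;
        }
        struct dirent *entry;
        while ((entry = readdir(d)) != NULL) {
            if (entry->d_name[0] != '.')     /* skip "." and ".." */
                printf("%s\n", entry->d_name);
        }
        closedir(d);
        return 0;
    }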
There are many ways, and some of them are unsettling. For example, Computrace seems to be a permanent backdoor that can bypass not only the operating system but even the BIOS. And more generally, the Intel Management Engine has full control over your computer and can plausibly be exploited. These can modify your BIOS but do not even need to. Just in 2017, security researchers figured out how to exploit the Intel IME via USB to run unsigned code.
The point is that even if you have a completely secure operating system and you never download any insecure or malicious software, there is still a non-negligible possibility that you can be affected by malware that bypasses all of that by exploiting a security vulnerability in your hardware (even when your computer is supposedly powered off).
Something I haven't seen mentioned here:
If the attacker gains sufficient permissions to install even an official UEFI firmware image, correctly signed by the system manufacturer, they can still potentially leave the computer in an unbootable state by forcefully powering off the computer at an opportune time during the process.
The update code in modern firmware usually tries to minimize the amount of time the computer spends in a state where a power failure would corrupt the firmware, and some firmware even has a recovery mode which will activate in such a case.
However, many of these systems aren't completely bulletproof. Although they offer good protection against random power failures, a well-timed poweroff could still knock the machine dead if the firmware doesn't have a robust automatic recovery feature.
Also, one may not even need to attack the main system firmware. Pretty much every device in a modern PC has firmware of some kind, and much of it can be updated via software. These devices are also often less secure: they may accept unsigned firmware images entirely, or at least be less resilient against malicious poweroffs during the update process.
If you destroy the firmware on the power controller, storage controller, storage device, video device, or input controller, the system may become just as unusable as if you had attacked the UEFI.
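One common way firmware updaters shrink that window of vulnerability (not necessarily what any particular vendor does) is an A/B, dual-slot scheme: write and verify the new image into the inactive slot, then switch slots with one small final write, so only that last write is critical. A toy, file-backed sketch of the idea follows; the "flash" layout here is entirely made up for illustration.

    /* Toy A/B update: two image slots plus a one-byte boot selector, with an
     * ordinary file standing in for flash. The new image is written to the
     * inactive slot and flushed before the selector flips, so a power cut at
     * any earlier point leaves the old, working image selected. */
    #include <stdio.h>
    #include <unistd.h>

    #define SLOT_SIZE 4096
    /* made-up layout: [selector: 1 byte][slot 0][slot 1] */

    static long slot_offset(int slot) { return 1 + (long)slot * SLOT_SIZE; }

    int install_update(const char *flash_path, const unsigned char *img, size_t len) {
        if (len > SLOT_SIZE) return -1;
        FILE *f = fopen(flash_path, "r+b");
        if (!f) return -1;

        int active = fgetc(f);               /* slot we currently boot from (0 or 1) */
        int target = (active == 0) ? 1 : 0;  /* write the new image into the other slot */

        fseek(f, slot_offset(target), SEEK_SET);
        fwrite(img, 1, len, f);
        fflush(f);
        fsync(fileno(f));                    /* image must be durable before the switch */

        fseek(f, 0, SEEK_SET);
        fputc((int)target, f);               /* the single small write that flips slots */
        fflush(f);
        fsync(fileno(f));
        fclose(f);
        return 0;
    }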
9 Answers
9
active
oldest
votes
9 Answers
9
active
oldest
votes
active
oldest
votes
active
oldest
votes
Modern computers don't have a BIOS, they have a UEFI. Updating the UEFI firmware from the running operating system is a standard procedure, so any malware which manages to get executed on the operating system with sufficient privileges could attempt to do the same. However, most UEFIs will not accept an update which isn't digitally signed by the manufacturer. That means it should not be possible to overwrite it with arbitrary code.
This, however, assumes that:
- the mainboard manufacturers manage to keep their private keys secret
- the UEFI doesn't have any unintended security vulnerabilities which allow overwriting it with arbitrary code or can otherwise be exploited to cause damage.
And those two assumptions do not necessarily hold.
Regarding leaked keys: if a UEFI signing key were to become known to the general public, then you can assume that there would be quite a lot of media reporting and hysterical patching going on. If you follow some IT news, you would likely see a lot of alarmist "If you have a [brand] mainboard UPDATE YOUR UEFI NOW!!!1111oneone" headlines. But another possibility is signing keys secretly leaked to state actors. So if your work might be interesting for industrial espionage, then this might also be a credible threat for you.
Regarding bugs: UEFIs gain more and more functionality which has more and more possibilities for hidden bugs. They also lack most of the internal security features you have after you have booted a "real" operating system.
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 5 at 8:03
add a comment |
Modern computers don't have a BIOS, they have a UEFI. Updating the UEFI firmware from the running operating system is a standard procedure, so any malware which manages to get executed on the operating system with sufficient privileges could attempt to do the same. However, most UEFIs will not accept an update which isn't digitally signed by the manufacturer. That means it should not be possible to overwrite it with arbitrary code.
This, however, assumes that:
- the mainboard manufacturers manage to keep their private keys secret
- the UEFI doesn't have any unintended security vulnerabilities which allow overwriting it with arbitrary code or can otherwise be exploited to cause damage.
And those two assumptions do not necessarily hold.
Regarding leaked keys: if a UEFI signing key were to become known to the general public, then you can assume that there would be quite a lot of media reporting and hysterical patching going on. If you follow some IT news, you would likely see a lot of alarmist "If you have a [brand] mainboard UPDATE YOUR UEFI NOW!!!1111oneone" headlines. But another possibility is signing keys secretly leaked to state actors. So if your work might be interesting for industrial espionage, then this might also be a credible threat for you.
Regarding bugs: UEFIs gain more and more functionality which has more and more possibilities for hidden bugs. They also lack most of the internal security features you have after you have booted a "real" operating system.
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 5 at 8:03
add a comment |
Modern computers don't have a BIOS, they have a UEFI. Updating the UEFI firmware from the running operating system is a standard procedure, so any malware which manages to get executed on the operating system with sufficient privileges could attempt to do the same. However, most UEFIs will not accept an update which isn't digitally signed by the manufacturer. That means it should not be possible to overwrite it with arbitrary code.
This, however, assumes that:
- the mainboard manufacturers manage to keep their private keys secret
- the UEFI doesn't have any unintended security vulnerabilities which allow overwriting it with arbitrary code or can otherwise be exploited to cause damage.
And those two assumptions do not necessarily hold.
Regarding leaked keys: if a UEFI signing key were to become known to the general public, then you can assume that there would be quite a lot of media reporting and hysterical patching going on. If you follow some IT news, you would likely see a lot of alarmist "If you have a [brand] mainboard UPDATE YOUR UEFI NOW!!!1111oneone" headlines. But another possibility is signing keys secretly leaked to state actors. So if your work might be interesting for industrial espionage, then this might also be a credible threat for you.
Regarding bugs: UEFIs gain more and more functionality which has more and more possibilities for hidden bugs. They also lack most of the internal security features you have after you have booted a "real" operating system.
Modern computers don't have a BIOS, they have a UEFI. Updating the UEFI firmware from the running operating system is a standard procedure, so any malware which manages to get executed on the operating system with sufficient privileges could attempt to do the same. However, most UEFIs will not accept an update which isn't digitally signed by the manufacturer. That means it should not be possible to overwrite it with arbitrary code.
This, however, assumes that:
- the mainboard manufacturers manage to keep their private keys secret
- the UEFI doesn't have any unintended security vulnerabilities which allow overwriting it with arbitrary code or can otherwise be exploited to cause damage.
And those two assumptions do not necessarily hold.
Regarding leaked keys: if a UEFI signing key were to become known to the general public, then you can assume that there would be quite a lot of media reporting and hysterical patching going on. If you follow some IT news, you would likely see a lot of alarmist "If you have a [brand] mainboard UPDATE YOUR UEFI NOW!!!1111oneone" headlines. But another possibility is signing keys secretly leaked to state actors. So if your work might be interesting for industrial espionage, then this might also be a credible threat for you.
Regarding bugs: UEFIs gain more and more functionality which has more and more possibilities for hidden bugs. They also lack most of the internal security features you have after you have booted a "real" operating system.
edited Apr 3 at 10:37
answered Apr 2 at 8:42
PhilippPhilipp
45.3k8116142
45.3k8116142
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 5 at 8:03
add a comment |
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 5 at 8:03
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 5 at 8:03
Comments are not for extended discussion; this conversation has been moved to chat.
– Rory Alsop♦
Apr 5 at 8:03
add a comment |
Yes, it is definitely possible.
Nowadays, with UEFI becoming widespread, it is even more of a concern: UEFI has a much larger attack surface than traditional BIOS and a (potential) flaw in UEFI could be leverage to gain access to machine without having any kind of physical access (as demonstrated by the people of Eclypsium at black hat last year).
add a comment |
Yes, it is definitely possible.
Nowadays, with UEFI becoming widespread, it is even more of a concern: UEFI has a much larger attack surface than traditional BIOS and a (potential) flaw in UEFI could be leverage to gain access to machine without having any kind of physical access (as demonstrated by the people of Eclypsium at black hat last year).
add a comment |
Yes, it is definitely possible.
Nowadays, with UEFI becoming widespread, it is even more of a concern: UEFI has a much larger attack surface than traditional BIOS and a (potential) flaw in UEFI could be leverage to gain access to machine without having any kind of physical access (as demonstrated by the people of Eclypsium at black hat last year).
Yes, it is definitely possible.
Nowadays, with UEFI becoming widespread, it is even more of a concern: UEFI has a much larger attack surface than traditional BIOS and a (potential) flaw in UEFI could be leverage to gain access to machine without having any kind of physical access (as demonstrated by the people of Eclypsium at black hat last year).
answered Apr 2 at 8:37
StephaneStephane
17.7k25464
17.7k25464
add a comment |
add a comment |
Practically speaking, a virus is software, so can do anything that any other software can do.
So the simple way answer to this question, and all others of the class "Can viruses do X?" is to ask "Does software currently do X?"
Such questions might include "can a virus walk my dog?" (not without a dog-walking robot); "Can a virus get me pizza?" (yes: this is regrettably not the main focus of most virus authors, however).
Are BIOSes (UEFI) currently updated using software? The answer is, yes they are. Mine updated last night, when I rebooted.
And so the answer is yes.
By the same logic, viruses can also cause (and historically have caused) physical damage to your CPU, hard drives, and printers.
Home automation systems and driverless vehicles are also possible targets for physical damages, but I know of no viruses which have done so.
2
I wouldn't mind much if my personal information was used by malware developers to order me free pizza and nothing else. (+1 for useful reasoning)
– Marc.2377
Apr 2 at 23:23
6
@Marc.2377, I would not mind much if your personal information was used to order me free pizza… :-)
– sleblanc
Apr 3 at 3:54
2
Modern viruses will have a very hard time causing physical damage. At most, they could wear down hardware a bit by running the CPU really hot, which shortens useful lifetime, but it's not common for it to be able to cause damage. In the past that wasn't the case though. See "the poke of death".
– forest
Apr 3 at 7:33
2
@forest Aren't the fans and cooling systems software controlled these days? I'm not sure, but I bet you could somehow foul the CPU or GPU fan from software. Russia destroyed generators remotely by toggling them on and off at a resonant frequency--I bet there are similar tricks that could kill your monitor pretty quickly. Platter hard drives can definitely be trashed by spinning them up and down repeatedly, solid state drives are vulnerable to repeated read/write cycles. I bet there is a lot a motivated hacker could do..
– Bill K
Apr 3 at 17:55
2
I think we'd need to define scope of "cause physical damage" before we decided if it was possible/plausible. If you constrain the definition to literally damaging the computer running the code, that's pretty narrow and I think @forest is right. If you include physical damage in a more general sense, it's much easier to imagine scenarios where an infected computer that's controlling something else (power plant, traffic lights, mass transit system, water treatment plant, etc) could easily cause major physical damage.
– dwizum
Apr 4 at 16:00
|
show 12 more comments
Practically speaking, a virus is software, so can do anything that any other software can do.
So the simple way answer to this question, and all others of the class "Can viruses do X?" is to ask "Does software currently do X?"
Such questions might include "can a virus walk my dog?" (not without a dog-walking robot); "Can a virus get me pizza?" (yes: this is regrettably not the main focus of most virus authors, however).
Are BIOSes (UEFI) currently updated using software? The answer is, yes they are. Mine updated last night, when I rebooted.
And so the answer is yes.
By the same logic, viruses can also cause (and historically have caused) physical damage to your CPU, hard drives, and printers.
Home automation systems and driverless vehicles are also possible targets for physical damages, but I know of no viruses which have done so.
2
I wouldn't mind much if my personal information was used by malware developers to order me free pizza and nothing else. (+1 for useful reasoning)
– Marc.2377
Apr 2 at 23:23
6
@Marc.2377, I would not mind much if your personal information was used to order me free pizza… :-)
– sleblanc
Apr 3 at 3:54
2
Modern viruses will have a very hard time causing physical damage. At most, they could wear down hardware a bit by running the CPU really hot, which shortens useful lifetime, but it's not common for it to be able to cause damage. In the past that wasn't the case though. See "the poke of death".
– forest
Apr 3 at 7:33
2
@forest Aren't the fans and cooling systems software controlled these days? I'm not sure, but I bet you could somehow foul the CPU or GPU fan from software. Russia destroyed generators remotely by toggling them on and off at a resonant frequency--I bet there are similar tricks that could kill your monitor pretty quickly. Platter hard drives can definitely be trashed by spinning them up and down repeatedly, solid state drives are vulnerable to repeated read/write cycles. I bet there is a lot a motivated hacker could do..
– Bill K
Apr 3 at 17:55
2
I think we'd need to define scope of "cause physical damage" before we decided if it was possible/plausible. If you constrain the definition to literally damaging the computer running the code, that's pretty narrow and I think @forest is right. If you include physical damage in a more general sense, it's much easier to imagine scenarios where an infected computer that's controlling something else (power plant, traffic lights, mass transit system, water treatment plant, etc) could easily cause major physical damage.
– dwizum
Apr 4 at 16:00
|
show 12 more comments
Practically speaking, a virus is software, so can do anything that any other software can do.
So the simple way answer to this question, and all others of the class "Can viruses do X?" is to ask "Does software currently do X?"
Such questions might include "can a virus walk my dog?" (not without a dog-walking robot); "Can a virus get me pizza?" (yes: this is regrettably not the main focus of most virus authors, however).
Are BIOSes (UEFI) currently updated using software? The answer is, yes they are. Mine updated last night, when I rebooted.
And so the answer is yes.
By the same logic, viruses can also cause (and historically have caused) physical damage to your CPU, hard drives, and printers.
Home automation systems and driverless vehicles are also possible targets for physical damages, but I know of no viruses which have done so.
Practically speaking, a virus is software, so can do anything that any other software can do.
So the simple way answer to this question, and all others of the class "Can viruses do X?" is to ask "Does software currently do X?"
Such questions might include "can a virus walk my dog?" (not without a dog-walking robot); "Can a virus get me pizza?" (yes: this is regrettably not the main focus of most virus authors, however).
Are BIOSes (UEFI) currently updated using software? The answer is, yes they are. Mine updated last night, when I rebooted.
And so the answer is yes.
By the same logic, viruses can also cause (and historically have caused) physical damage to your CPU, hard drives, and printers.
Home automation systems and driverless vehicles are also possible targets for physical damages, but I know of no viruses which have done so.
answered Apr 2 at 19:39
Dewi MorganDewi Morgan
1,280514
1,280514
2
I wouldn't mind much if my personal information was used by malware developers to order me free pizza and nothing else. (+1 for useful reasoning)
– Marc.2377
Apr 2 at 23:23
6
@Marc.2377, I would not mind much if your personal information was used to order me free pizza… :-)
– sleblanc
Apr 3 at 3:54
2
Modern viruses will have a very hard time causing physical damage. At most, they could wear down hardware a bit by running the CPU really hot, which shortens useful lifetime, but it's not common for it to be able to cause damage. In the past that wasn't the case though. See "the poke of death".
– forest
Apr 3 at 7:33
2
@forest Aren't the fans and cooling systems software controlled these days? I'm not sure, but I bet you could somehow foul the CPU or GPU fan from software. Russia destroyed generators remotely by toggling them on and off at a resonant frequency--I bet there are similar tricks that could kill your monitor pretty quickly. Platter hard drives can definitely be trashed by spinning them up and down repeatedly, solid state drives are vulnerable to repeated read/write cycles. I bet there is a lot a motivated hacker could do..
– Bill K
Apr 3 at 17:55
2
I think we'd need to define scope of "cause physical damage" before we decided if it was possible/plausible. If you constrain the definition to literally damaging the computer running the code, that's pretty narrow and I think @forest is right. If you include physical damage in a more general sense, it's much easier to imagine scenarios where an infected computer that's controlling something else (power plant, traffic lights, mass transit system, water treatment plant, etc) could easily cause major physical damage.
– dwizum
Apr 4 at 16:00
|
show 12 more comments
2
I wouldn't mind much if my personal information was used by malware developers to order me free pizza and nothing else. (+1 for useful reasoning)
– Marc.2377
Apr 2 at 23:23
6
@Marc.2377, I would not mind much if your personal information was used to order me free pizza… :-)
– sleblanc
Apr 3 at 3:54
2
Modern viruses will have a very hard time causing physical damage. At most, they could wear down hardware a bit by running the CPU really hot, which shortens useful lifetime, but it's not common for it to be able to cause damage. In the past that wasn't the case though. See "the poke of death".
– forest
Apr 3 at 7:33
2
@forest Aren't the fans and cooling systems software controlled these days? I'm not sure, but I bet you could somehow foul the CPU or GPU fan from software. Russia destroyed generators remotely by toggling them on and off at a resonant frequency--I bet there are similar tricks that could kill your monitor pretty quickly. Platter hard drives can definitely be trashed by spinning them up and down repeatedly, solid state drives are vulnerable to repeated read/write cycles. I bet there is a lot a motivated hacker could do..
– Bill K
Apr 3 at 17:55
2
I think we'd need to define scope of "cause physical damage" before we decided if it was possible/plausible. If you constrain the definition to literally damaging the computer running the code, that's pretty narrow and I think @forest is right. If you include physical damage in a more general sense, it's much easier to imagine scenarios where an infected computer that's controlling something else (power plant, traffic lights, mass transit system, water treatment plant, etc) could easily cause major physical damage.
– dwizum
Apr 4 at 16:00
2
2
I wouldn't mind much if my personal information was used by malware developers to order me free pizza and nothing else. (+1 for useful reasoning)
– Marc.2377
Apr 2 at 23:23
I wouldn't mind much if my personal information was used by malware developers to order me free pizza and nothing else. (+1 for useful reasoning)
– Marc.2377
Apr 2 at 23:23
6
6
@Marc.2377, I would not mind much if your personal information was used to order me free pizza… :-)
– sleblanc
Apr 3 at 3:54
@Marc.2377, I would not mind much if your personal information was used to order me free pizza… :-)
– sleblanc
Apr 3 at 3:54
2
2
Modern viruses will have a very hard time causing physical damage. At most, they could wear down hardware a bit by running the CPU really hot, which shortens useful lifetime, but it's not common for it to be able to cause damage. In the past that wasn't the case though. See "the poke of death".
– forest
Apr 3 at 7:33
Modern viruses will have a very hard time causing physical damage. At most, they could wear down hardware a bit by running the CPU really hot, which shortens useful lifetime, but it's not common for it to be able to cause damage. In the past that wasn't the case though. See "the poke of death".
– forest
Apr 3 at 7:33
2
2
@forest Aren't the fans and cooling systems software controlled these days? I'm not sure, but I bet you could somehow foul the CPU or GPU fan from software. Russia destroyed generators remotely by toggling them on and off at a resonant frequency--I bet there are similar tricks that could kill your monitor pretty quickly. Platter hard drives can definitely be trashed by spinning them up and down repeatedly, solid state drives are vulnerable to repeated read/write cycles. I bet there is a lot a motivated hacker could do..
– Bill K
Apr 3 at 17:55
@forest Aren't the fans and cooling systems software controlled these days? I'm not sure, but I bet you could somehow foul the CPU or GPU fan from software. Russia destroyed generators remotely by toggling them on and off at a resonant frequency--I bet there are similar tricks that could kill your monitor pretty quickly. Platter hard drives can definitely be trashed by spinning them up and down repeatedly, solid state drives are vulnerable to repeated read/write cycles. I bet there is a lot a motivated hacker could do..
– Bill K
Apr 3 at 17:55
2
2
I think we'd need to define scope of "cause physical damage" before we decided if it was possible/plausible. If you constrain the definition to literally damaging the computer running the code, that's pretty narrow and I think @forest is right. If you include physical damage in a more general sense, it's much easier to imagine scenarios where an infected computer that's controlling something else (power plant, traffic lights, mass transit system, water treatment plant, etc) could easily cause major physical damage.
– dwizum
Apr 4 at 16:00
I think we'd need to define scope of "cause physical damage" before we decided if it was possible/plausible. If you constrain the definition to literally damaging the computer running the code, that's pretty narrow and I think @forest is right. If you include physical damage in a more general sense, it's much easier to imagine scenarios where an infected computer that's controlling something else (power plant, traffic lights, mass transit system, water treatment plant, etc) could easily cause major physical damage.
– dwizum
Apr 4 at 16:00
|
show 12 more comments
Yes, it is definitely possible.
Here is an example of a malware OS update fraudulently signed with the manufacturer's private key:
https://www.theregister.co.uk/2019/03/25/asus_software_update_utility_backdoor/
According to Kaspersky Labs, about a million Asus laptops were infected by Shadowhammer
, with an update that appeared to be correctly signed. It's not clear if that altered the firmware, but it certainly could have done.
add a comment |
Yes, it is definitely possible.
Here is an example of a malware OS update fraudulently signed with the manufacturer's private key:
https://www.theregister.co.uk/2019/03/25/asus_software_update_utility_backdoor/
According to Kaspersky Labs, about a million Asus laptops were infected by Shadowhammer
, with an update that appeared to be correctly signed. It's not clear if that altered the firmware, but it certainly could have done.
add a comment |
Yes, it is definitely possible.
Here is an example of a malware OS update fraudulently signed with the manufacturer's private key:
https://www.theregister.co.uk/2019/03/25/asus_software_update_utility_backdoor/
According to Kaspersky Labs, about a million Asus laptops were infected by Shadowhammer
, with an update that appeared to be correctly signed. It's not clear if that altered the firmware, but it certainly could have done.
Yes, it is definitely possible.
Here is an example of a malware OS update fraudulently signed with the manufacturer's private key:
https://www.theregister.co.uk/2019/03/25/asus_software_update_utility_backdoor/
According to Kaspersky Labs, about a million Asus laptops were infected by Shadowhammer
, with an update that appeared to be correctly signed. It's not clear if that altered the firmware, but it certainly could have done.
answered Apr 3 at 6:50
emrys57emrys57
2112
2112
add a comment |
add a comment |
Your question hints at a more deep subject that is rings and permissions of code on an operating system. On MS DOS the code could do whatever it wants. If the code wanted to write all 0x00's to a hard drive it could if it wanted to send strange output to a piece of hardware it could also there was nothing stopping the user's code. On a modern OS there is a concept of rings (this is enforced by the CPU). The kernel runs on ring zero and it could do whatever it wants. The user's code on the other hand can not. It runs on something called ring 3 and it is given it's own little piece of memory and inside of that memory it can do whatever it wants but it can not directly talk to hardware. If the user's code tries to talk to hardware then the kernel immediately kills the program. This means that it is highly unlikely that a regular virus can kill hardware because it can not talk to it directly.
If the kernel is hacked then the game is basically over. The kernel can do whatever it wants and a whole host of bad things can happen such as overclocking the CPU to a point where the hardware is unstable, wiping the hard drives (filling the with zeros for example), or pretty much any other plausible attack.
3
"If the user's code tries to talk to hardware then the kernel immediately kills the program" - Really? Can you provide a citation for that? I thought the protected instruction would simply fail and it's up to the program to deal with that reasonably or crash.
– Marc.2377
Apr 2 at 23:21
1
@Marc.2377 It is correct. If the user's code attempts to execute an instruction in CPL3 that requires CPL0 privileges, it will throw#GP(0)
(general protection fault, or GPF). This causes the code to jump into the kernel to see what signal handler was set up for that event. By default, the kernel will kill the process, though it's technically possible for the process to set up a signal handler for SIGSEGV, in which case the kernel resumes execution of the process at the location of the signal handler. It's generally not a good idea though because a process is considered to be in an...
– forest
Apr 3 at 7:20
...undefined state according to POSIX if execution resumes after a SIGSEGV has been raised that didn't come fromraise()
. It will resume execution at the failed instruction which will just run again and cause the process to lock up if the signal is ignored. So it can be up to the program to deal with it, if it sets up a signal handler for SIGSEGV, but there's pretty much never any situation where that would be done (though I think the Dolphin emulator catches segfaults for some sort of hacky optimization so it doesn't have to emulate some weird paging behavior and can rely on the MMU).
– forest
Apr 3 at 7:20
See this for a (rare) example of when it is up to the program. Or just read PoC||GTFO 6:3.
– forest
Apr 3 at 7:26
1
@forest Thanks a lot.
– Marc.2377
Apr 3 at 23:52
add a comment |
Your question hints at a more deep subject that is rings and permissions of code on an operating system. On MS DOS the code could do whatever it wants. If the code wanted to write all 0x00's to a hard drive it could if it wanted to send strange output to a piece of hardware it could also there was nothing stopping the user's code. On a modern OS there is a concept of rings (this is enforced by the CPU). The kernel runs on ring zero and it could do whatever it wants. The user's code on the other hand can not. It runs on something called ring 3 and it is given it's own little piece of memory and inside of that memory it can do whatever it wants but it can not directly talk to hardware. If the user's code tries to talk to hardware then the kernel immediately kills the program. This means that it is highly unlikely that a regular virus can kill hardware because it can not talk to it directly.
If the kernel is hacked then the game is basically over. The kernel can do whatever it wants and a whole host of bad things can happen such as overclocking the CPU to a point where the hardware is unstable, wiping the hard drives (filling the with zeros for example), or pretty much any other plausible attack.
3
"If the user's code tries to talk to hardware then the kernel immediately kills the program" - Really? Can you provide a citation for that? I thought the protected instruction would simply fail and it's up to the program to deal with that reasonably or crash.
– Marc.2377
Apr 2 at 23:21
1
@Marc.2377 It is correct. If the user's code attempts to execute an instruction in CPL3 that requires CPL0 privileges, it will throw#GP(0)
(general protection fault, or GPF). This causes the code to jump into the kernel to see what signal handler was set up for that event. By default, the kernel will kill the process, though it's technically possible for the process to set up a signal handler for SIGSEGV, in which case the kernel resumes execution of the process at the location of the signal handler. It's generally not a good idea though because a process is considered to be in an...
– forest
Apr 3 at 7:20
...undefined state according to POSIX if execution resumes after a SIGSEGV has been raised that didn't come fromraise()
. It will resume execution at the failed instruction which will just run again and cause the process to lock up if the signal is ignored. So it can be up to the program to deal with it, if it sets up a signal handler for SIGSEGV, but there's pretty much never any situation where that would be done (though I think the Dolphin emulator catches segfaults for some sort of hacky optimization so it doesn't have to emulate some weird paging behavior and can rely on the MMU).
– forest
Apr 3 at 7:20
See this for a (rare) example of when it is up to the program. Or just read PoC||GTFO 6:3.
– forest
Apr 3 at 7:26
1
@forest Thanks a lot.
– Marc.2377
Apr 3 at 23:52
add a comment |
Your question hints at a more deep subject that is rings and permissions of code on an operating system. On MS DOS the code could do whatever it wants. If the code wanted to write all 0x00's to a hard drive it could if it wanted to send strange output to a piece of hardware it could also there was nothing stopping the user's code. On a modern OS there is a concept of rings (this is enforced by the CPU). The kernel runs on ring zero and it could do whatever it wants. The user's code on the other hand can not. It runs on something called ring 3 and it is given it's own little piece of memory and inside of that memory it can do whatever it wants but it can not directly talk to hardware. If the user's code tries to talk to hardware then the kernel immediately kills the program. This means that it is highly unlikely that a regular virus can kill hardware because it can not talk to it directly.
If the kernel is hacked then the game is basically over. The kernel can do whatever it wants and a whole host of bad things can happen such as overclocking the CPU to a point where the hardware is unstable, wiping the hard drives (filling the with zeros for example), or pretty much any other plausible attack.
Your question hints at a more deep subject that is rings and permissions of code on an operating system. On MS DOS the code could do whatever it wants. If the code wanted to write all 0x00's to a hard drive it could if it wanted to send strange output to a piece of hardware it could also there was nothing stopping the user's code. On a modern OS there is a concept of rings (this is enforced by the CPU). The kernel runs on ring zero and it could do whatever it wants. The user's code on the other hand can not. It runs on something called ring 3 and it is given it's own little piece of memory and inside of that memory it can do whatever it wants but it can not directly talk to hardware. If the user's code tries to talk to hardware then the kernel immediately kills the program. This means that it is highly unlikely that a regular virus can kill hardware because it can not talk to it directly.
If the kernel is hacked then the game is basically over. The kernel can do whatever it wants and a whole host of bad things can happen such as overclocking the CPU to a point where the hardware is unstable, wiping the hard drives (filling the with zeros for example), or pretty much any other plausible attack.
answered Apr 2 at 22:10
scifi6546scifi6546
491
491
3
"If the user's code tries to talk to hardware then the kernel immediately kills the program" - Really? Can you provide a citation for that? I thought the protected instruction would simply fail and it's up to the program to deal with that reasonably or crash.
– Marc.2377
Apr 2 at 23:21
1
@Marc.2377 It is correct. If the user's code attempts to execute an instruction in CPL3 that requires CPL0 privileges, it will throw#GP(0)
(general protection fault, or GPF). This causes the code to jump into the kernel to see what signal handler was set up for that event. By default, the kernel will kill the process, though it's technically possible for the process to set up a signal handler for SIGSEGV, in which case the kernel resumes execution of the process at the location of the signal handler. It's generally not a good idea though because a process is considered to be in an...
– forest
Apr 3 at 7:20
...undefined state according to POSIX if execution resumes after a SIGSEGV has been raised that didn't come fromraise()
. It will resume execution at the failed instruction which will just run again and cause the process to lock up if the signal is ignored. So it can be up to the program to deal with it, if it sets up a signal handler for SIGSEGV, but there's pretty much never any situation where that would be done (though I think the Dolphin emulator catches segfaults for some sort of hacky optimization so it doesn't have to emulate some weird paging behavior and can rely on the MMU).
– forest
Apr 3 at 7:20
See this for a (rare) example of when it is up to the program. Or just read PoC||GTFO 6:3.
– forest
Apr 3 at 7:26
1
@forest Thanks a lot.
– Marc.2377
Apr 3 at 23:52
Potentially. It would be hard to do, however, as it would more than likely have to masquerade as a legitimate BIOS update somewhere along the line. The exact method will vary with the motherboard, but chances are it would have to involve leaked private keys, hardware keys, or other secrets.
answered Apr 2 at 8:13 by 520
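To make the role of those secrets concrete, here is a purely illustrative sketch of the kind of integrity check an update path performs before flashing. A SHA-256 digest comparison stands in for the real vendor-signature verification that signed firmware updates use, and expected_digest is a placeholder, not a real vendor value (Linux with OpenSSL; build with gcc check_image.c -lcrypto).

/*
 * Conceptual sketch only: illustrates why update signing matters.
 * A digest comparison stands in for real signature verification;
 * expected_digest below is a placeholder, not a vendor-published value.
 */
#include <openssl/sha.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical digest of the genuine firmware image (placeholder). */
static const unsigned char expected_digest[SHA256_DIGEST_LENGTH] = { 0 };

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s firmware.bin\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    /* Read the whole image into memory. */
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);
    unsigned char *buf = malloc((size_t)len);
    if (!buf || fread(buf, 1, (size_t)len, f) != (size_t)len) {
        fprintf(stderr, "read error\n");
        return 1;
    }
    fclose(f);

    /* Hash the image and compare against the trusted digest. */
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(buf, (size_t)len, digest);
    free(buf);

    if (memcmp(digest, expected_digest, sizeof digest) != 0) {
        fprintf(stderr, "image does not match the trusted digest; refusing to flash\n");
        return 1;
    }
    puts("image verified (against the placeholder digest)");
    return 0;
}

If the secret behind a check like this leaks, a malicious image passes it just as easily as a genuine one, which is exactly the scenario this answer describes.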
Yes. It's hardware-specific, but here is one case of a user accidentally breaking their motherboard firmware from the OS level: https://github.com/systemd/systemd/issues/2402
A bug in the firmware of an MSI laptop meant that clearing the EFI variables left the laptop unusable. Because these variables were exposed to the OS and mounted as files, deleting every file from the OS level triggered the issue, and a virus could exploit it by specifically targeting those variables.
edited Apr 4 at 4:44, answered Apr 4 at 4:08 by Qwertie
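To see how exposed these variables are, here is a small sketch for Linux systems booted via UEFI. It only lists the variable files under /sys/firmware/efi/efivars (efivarfs) and modifies nothing.

/*
 * Sketch for UEFI-booted Linux: firmware NVRAM variables are exposed as
 * files under /sys/firmware/efi/efivars, which is why an over-eager
 * "delete everything" could reach them. This program only lists them.
 */
#include <dirent.h>
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/firmware/efi/efivars";
    DIR *dir = opendir(path);
    if (!dir) {
        perror("opendir (not a UEFI boot, or efivarfs not mounted?)");
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')
            continue;               /* skip "." and ".." */
        /* Each entry is VariableName-VendorGUID; writes go to NVRAM. */
        printf("%s\n", entry->d_name);
    }
    closedir(dir);
    return 0;
}

Deleting or rewriting those files goes straight to the firmware's NVRAM; in response to exactly this class of incident, newer kernels mark most efivarfs entries immutable by default.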
There are many ways, and some of them are unsettling. For example, Computrace appears to be a permanent backdoor that can bypass not only the operating system but even the BIOS. More generally, the Intel Management Engine has full control over your computer and can plausibly be exploited. These can modify your BIOS but do not even need to: in 2017, security researchers figured out how to exploit the Intel ME via USB to run unsigned code.
The point is that even if you have a completely secure operating system and you never download any insecure or malicious software, there is still a non-negligible possibility that you can be affected by malware that bypasses all of that by exploiting a security vulnerability in your hardware (even when your computer is supposedly powered off).
edited Apr 4 at 11:57, answered Apr 4 at 11:45 by user21820
Something I haven't seen mentioned here yet:
If an attacker gains sufficient privileges to install even an official UEFI firmware image, correctly signed by the system manufacturer, they can still potentially leave the computer in an unbootable state by forcefully powering it off at an opportune moment during the update.
The update code in modern firmware usually tries to minimize the amount of time the computer spends in a state where a power failure would corrupt the firmware, and some firmware even has a recovery mode that activates in such a case.
However, many of these systems aren't completely bulletproof. Although they offer good protection against random power failures, a well-timed power-off could still knock the machine dead if the firmware doesn't have a robust automatic recovery feature.
Also, one may not even need to attack the main system firmware. Pretty much every device in a modern PC has firmware of some kind, and much of it can be updated via software. These devices are also often less secure: they may accept unsigned firmware entirely, or at least be less resilient against a malicious power-off during the update process.
If you destroy the firmware on the power controller, storage controller, storage device, video device, or input controller, the system may become just as unusable as if you had attacked the UEFI itself.
edited Apr 5 at 10:11, answered Apr 5 at 9:59 by Lily Finley
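As a purely conceptual illustration (no real flash access; all names invented), the following sketch shows why a naive erase-then-rewrite update has a window in which a power cut leaves no bootable image, while a dual-bank layout that flips a boot pointer as its last step survives the same interruption.

/*
 * Conceptual model only: why a well-timed power cut during a firmware
 * update can brick a device, and why a dual-bank ("A/B") layout with an
 * atomic switch is more robust. All names here are invented.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define FLASH_SIZE 8

static char bank_a[FLASH_SIZE] = "FW-v1.0";   /* currently active image  */
static char bank_b[FLASH_SIZE];               /* spare bank (dual-bank)  */
static int  active_bank = 0;                  /* tiny "boot pointer"     */

/* Naive in-place update: erase, then rewrite the only copy. */
static bool naive_update(const char *new_fw, bool power_cut_mid_update)
{
    memset(bank_a, 0, sizeof bank_a);          /* erase: image now gone   */
    if (power_cut_mid_update)
        return false;                          /* nothing bootable left   */
    strncpy(bank_a, new_fw, sizeof bank_a - 1);
    return true;
}

/* Dual-bank update: write the spare bank, then flip the pointer last. */
static bool ab_update(const char *new_fw, bool power_cut_mid_update)
{
    strncpy(bank_b, new_fw, sizeof bank_b - 1);
    if (power_cut_mid_update)
        return true;                           /* old image still intact  */
    active_bank = 1;                           /* single, final switch    */
    return true;
}

int main(void)
{
    printf("naive update, power cut:     %s\n",
           naive_update("FW-v2.0", true) ? "bootable" : "bricked");
    printf("dual-bank update, power cut: %s\n",
           ab_update("FW-v2.0", true) ? "bootable" : "bricked");
    printf("active bank after A/B attempt: %c\n", active_bank ? 'B' : 'A');
    return 0;
}

The recovery modes mentioned above are, in essence, vendor-specific variations on keeping a second known-good copy around until the new image is fully in place.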
Yes, but from an attacker's perspective it is a waste of resources... More info on a rootkit for UEFI, as an example, in this paper: welivesecurity.com/wp-content/uploads/2018/09/ESET-LoJax.pdf
– Hugo
Apr 4 at 12:15
Some (or most?) desktop motherboards have a ROM used to recover the BIOS from some form of media (in the old days, floppy disks; these days, USB sticks or maybe CD-ROM). The ROM can't be modified; however, recovery usually requires opening the case and moving a jumper to boot into BIOS recovery mode. I don't know how laptops deal with this.
– rcgldr
Apr 4 at 16:11
Related: security.stackexchange.com/q/13105/165253
– forest
Apr 5 at 5:18