"Mark Kent" <mark.kent@xxxxxxxxxxx> wrote in message
news:npkhp3-kpb.ln1@xxxxxxxxxxxxxxxxxxxxxxxxx
begin oe_protect.scr
Again, what's with this strange header?
Oliver Wong <owong@xxxxxxxxxxxxxx> espoused:
"Roy Schestowitz" <newsgroups@xxxxxxxxxxxxxxx> wrote in message
news:1475230.NGcxXIZqyJ@xxxxxxxxxxxxxxxxxx
__/ [ Oliver Wong ] on Monday 24 July 2006 17:52 \__
"Mark Kent" <mark.kent@xxxxxxxxxxx> wrote in message
news:7qeep3-h9o.ln1@xxxxxxxxxxxxxxxxxxxxxxxxx
It goes on to describe [...] superior methods for taking Linux
forward to make it much more immune to such difficulties. Techniques
which have been successfully used are described, including encrypting
the running code in memory to make it much more difficult to exploit
a buffer overflow (the virus wouldn't be able to encrypt itself the
right way).
Interestingly enough, this sounds a lot like a "whitehat"
application of DRM and/or Trusted Computing.
[...]
This is not what the encryption is about. The article describes
encrypting a programme /whilst in memory/ in some way, so that a
buffer overflow cannot be used. Once the programme has run, or the
machine is shut down, that particular version disappears, and the
next run will use a totally different encryption key.
Yes, I understand what you're saying about encryption "whilst in
memory". The program might be stored in plaintext or already encrypted
on the hard disk; it doesn't matter for this discussion. When the
program is about to be run, it is loaded into memory and encrypted with
a random key. I got that part.
What you seem to have glossed over is that the CPU has to somehow
decrypt the program in order to run it. Without specialized hardware
support, you'd have to decrypt the program in memory, thus giving the
virus/malware/whatever a window of opportunity to infect it, rendering
this technique useless. With specialized hardware support, you have a
TC-enabled CPU.
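To make that window concrete, here's a toy sketch of my own (not the
scheme from the New Scientist article): the "program" is a few bytes of
x86-64 machine code kept XOR-scrambled in memory with a fresh per-run
key, and it has to be unscrambled in place before the CPU can jump to
it. It assumes Linux and a page that may be mapped writable and
executable at once; systems enforcing W^X will refuse the mmap.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/mman.h>

/* mov eax, 42 ; ret : a stand-in for "the protected program" */
static const unsigned char plain_code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

int main(void)
{
    srand((unsigned)time(NULL));
    unsigned char key = (unsigned char)(rand() % 255 + 1);  /* fresh per-run key */

    /* an RWX page standing in for the process image in memory */
    void *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }
    unsigned char *buf = mem;

    /* stored "encrypted": code injected here by an overflow turns to
       garbage, because the attacker doesn't know this run's key */
    for (size_t i = 0; i < sizeof plain_code; i++)
        buf[i] = plain_code[i] ^ key;

    /* ...but before the CPU can run it, someone has to undo the scrambling: */
    for (size_t i = 0; i < sizeof plain_code; i++)
        buf[i] ^= key;                 /* the plaintext window opens here */

    int (*fn)(void) = (int (*)(void))buf;   /* and stays open while it executes */
    printf("protected code returned %d\n", fn());

    munmap(buf, 4096);
    return 0;
}

Everything between the unscrambling loop and the call is the window:
anything with write access to that page during that stretch sees, and
can patch, plaintext code. Closing it means doing the decryption inside
the CPU itself, which is where the specialized hardware comes in.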
This is positively /nothing/ to do with so-called "trusted computing",
which is all about a third party taking control of your computer. The
encryption described in New Scientist is all about _preventing_ a
third party from taking control of your computer.
Your amazing ability to read things ars*-about-face is reminiscent of
Erik F or Tim Smith.
http://en.wikipedia.org/wiki/Trusted_computing
<quote>
[...] it means that it can be trusted more fully to follow its intended
programming with a lower possibility of inappropriate activities occurring
that are forbidden by its designers and other software writers.
[...]
Trusted computing encompasses four key technology concepts, all of which
are required for a fully trusted system.
1. Secure Input and Output
2. Memory curtaining / Protected execution
3. Sealed storage
4. Remote attestation
Secure I/O
Secure input and output (I/O) refers to a protected path between the
computer user and the software with which they believe they are interacting.
On current computer systems there are many ways for malicious software to
intercept data as it travels between a user and a software process - for
example keyboard loggers and screen-scrapers. Secure I/O reflects a hardware
and software protected and verified channel, using checksums to verify that
the software used to do the I/O has not been tampered with. Malicious
software injecting itself in this path could be identified.
Although it protects against software attacks, Secure I/O doesn't assist in
protection against hardware-based attacks such as a key capture device
physically inserted between the user's keyboard and the computer.
Memory curtaining
Memory curtaining extends the current memory protection techniques to
provide full isolation of sensitive areas of memory — for example locations
containing cryptographic keys. Even the operating system doesn't have full
access to curtained memory, so the information would be secure from an
intruder who took control of the OS.
Sealed storage
Sealed storage protects private information by allowing it to be encrypted
using a key derived from the software and hardware being used. This means
the data can be read only by the same combination of software and hardware.
For example, users who keep a private diary on their computer do not want
other programs or other computers to be able to read it. Currently, a virus
can search for the diary, read it, and send it to someone else. The Sircam
virus did something similar to this. Even if the diary were protected by a
password, the virus might run a dictionary attack. Alternatively, the virus
might modify the user's diary software to have it leak the text once he or
she unlocks the diary. Using sealed storage, the diary is securely
encrypted so that only the unmodified diary program on his or her computer
can read it.
[...]
Possible applications for Trusted Computing
[...]
Virus and spyware
Digital signatures of software will allow users to identify applications
modified by third parties that could add spyware to the software. For
example, some sites offer modified versions of MSN Messenger that contain
spyware. The operating system could notice the lack of a valid signature for
these versions and inform the user that the program has been modified.
Trusted computing could also prevent attacks by viruses; however, Microsoft
has denied that this functionality will be present in its NGSCB architecture.
Trusted computing could also be used by antivirus vendors to write antivirus
software that can't be corrupted by virus attacks.
</quote>
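Just to put a concrete shape on the "sealed storage" item, here's a
rough toy of my own (nothing like what a real TPM does: the FNV hash,
the XOR "cipher" and the hard-coded hardware id are all stand-ins). The
key that unseals the data is derived from a measurement of the software
plus the identity of the machine, so a modified diary program, or the
same program on a different machine, derives a different key and gets
garbage back.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define HARDWARE_ID 0x1122334455667788ULL  /* stand-in for a per-machine secret */

/* FNV-1a: NOT a cryptographic hash, just a placeholder "measurement" of
   the software; a real implementation would use a proper hash in a TPM */
static uint64_t measure_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); exit(1); }
    uint64_t h = 1469598103934665603ULL;
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 1099511628211ULL;
    }
    fclose(f);
    return h;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <diary-program-binary>\n", argv[0]);
        return 1;
    }

    /* sealing key = f(measurement of the software, identity of the hardware):
       change either one and a different key comes out */
    uint64_t key = measure_file(argv[1]) ^ HARDWARE_ID;

    uint64_t diary_secret = 0xCAFEBABEDEADBEEFULL;   /* the "private diary" */
    uint64_t sealed = diary_secret ^ key;            /* toy "encryption" */

    printf("sealed blob:         %016llx\n", (unsigned long long)sealed);
    printf("unsealed (same key): %016llx\n", (unsigned long long)(sealed ^ key));
    return 0;
}

Point it at the diary program's binary (the path is made up, of course);
rebuild or patch that binary and the same sealed blob no longer unseals.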
Doesn't this sound like exactly what you were saying about encrypting
programs as they run in memory?
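Incidentally, the "memory curtaining" item above is the nearest relative
of the in-memory protection you described. Without TC hardware, the
closest approximation is ordinary page protection, something like this
toy (POSIX mprotect; the catch, and the reason curtaining needs hardware,
is that any code running as the OS or as this process can simply flip
the page back):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    void *mem = mmap(NULL, (size_t)pagesz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }
    unsigned char *page = mem;

    memcpy(page, "not-a-real-key", 15);          /* stash the "key" */
    mprotect(page, (size_t)pagesz, PROT_NONE);   /* curtain closed: any access faults */

    /* ... program runs; stray pointers and overflows can't read the page ... */

    mprotect(page, (size_t)pagesz, PROT_READ);   /* curtain open only while needed */
    printf("key in use: %s\n", (char *)page);
    mprotect(page, (size_t)pagesz, PROT_NONE);   /* and closed again */

    munmap(page, (size_t)pagesz);
    return 0;
}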
- Oliver