You are touching a sore point…
Historically, computers were mainframes on which many distinct users launched sessions and processes on the same physical machine. Unix-like systems (e.g. Linux), but also VMS and its relatives (a family which includes all Windows versions of the NT line, hence 2000, XP, Vista, 7, 8…), have been structured to support the mainframe model.
Thus, the hardware provides privilege levels. A central piece of the operating system is the kernel which runs at the highest privilege level (yes, I know there are subtleties with regards to virtualization) and manages the privilege levels. Applications run at a lower level and are forcibly prevented by the kernel from reading or writing each other’s memory. Applications obtain RAM by pages (typically 4 or 8 kB) from the kernel. An application which tries to access a page belonging to another application is blocked by the kernel, and severely punished (“segmentation fault”, “general protection fault”…).
When an application no longer needs a page (in particular when the application exits), the kernel takes control of the page and may give it to another process. Modern operating systems “blank” pages before giving them out, where “blanking” means “filling with zeros”. This prevents data from leaking from one process to another. Note that Windows 95/98/Millennium did not blank pages, and leaks could occur… but these operating systems were meant for a single user per machine.
Of course, there are ways to escape the wrath of the kernel: a few doorways are available to applications which have “enough privilege” (not the same kind of privilege as above). On a Linux system, this is ptrace(). The kernel allows one process to read and write the memory of another, through ptrace(), provided that both processes run under the same user ID, or that the process which calls ptrace() is a “root” process. Similar functionality exists in Windows.
The bottom line is that passwords in RAM are no safer than what the operating system allows. By definition, by storing some confidential data in the memory of a process, you are trusting the operating system not to give it away to third parties. The OS is your friend, because if the OS is an enemy then you have utterly lost.
Now comes the fun part. Since the OS enforces a separation between processes, many people have tried to find ways to pierce these defenses. And they found a few interesting things…
The “RAM” which the applications see is not necessarily true “memory”. The kernel is a master of illusions, and gives pages that do not necessarily exist. The illusion is maintained by swapping RAM contents with a dedicated space on the disk, where free space is present in larger quantities; this is called virtual memory. Applications need not be aware of it, because the kernel will bring back the pages when needed (but, of course, disk is much slower than RAM). An unfortunate consequence is that some data, purportedly held in RAM, makes it to a physical medium where it will stay until overwritten. In particular, it will stay there if the power is cut. This allows for attacks where the bad guy grabs the machine and runs away with it, to inspect the data later on. Or leakage can occur when a machine is decommissioned and sold on eBay, and the sysadmin forgot to wipe out the disk contents.
Linux provides a system call, mlock(), which prevents the kernel from sending specific pages to the swap space. Since locking pages in RAM can deplete the RAM available to other processes, you need some privileges (root again) to lock more than a small per-process quota of memory.
Hibernation brings back the same issues, with a vengeance. By nature, hibernation must write the whole RAM to the disk — this may include pages which were mlocked, and even the contents of the CPU registers. To avoid leaks through hibernation, you have to resort to drastic measures like encrypting the whole disk — this naturally implies typing the unlock password whenever you wake the machine.
The mainframe model assumes that it can run several processes which are hostile to each other, and yet maintain perfect peace and isolation. Modern hardware makes that very difficult. When two processes run on the same CPU, they share some resources, including cache memory; memory accesses are much faster in the cache than elsewhere, but cache size is very limited. This has been exploited to recover cryptographic keys used by one process from another. Variants have been developed which use other cache-like resources, e.g. the branch prediction unit in a CPU. While research on the subject concentrates on cryptographic keys, which are high-value secrets, it could really apply to just about any data.
On a similar note, video cards can do Direct Memory Access (DMA). Whether DMA can be abused to read or write the memory of other processes depends on how well undocumented hardware, closed-source drivers and kernels collaborate to enforce the appropriate access controls. I would not bet my last shirt on it…
Conclusion: yes, when you store a password in RAM, you are trusting the OS to keep it confidential. Yes, the task is hard, even nigh impossible on modern systems. If some data is highly confidential, you really should not use the mainframe model: do not allow potentially hostile entities to run their code on your machine.
(Which, by the way, means that hosted virtual machines and cloud computing cannot be ultimately safe. If you are serious about security, use dedicated hardware.)
I thought the OS did its job and prevented processes from accessing each other's allocated memory, but it seems this is somehow doable.
Yes, it is possible to access the memory of another process. On Windows, this amounts to holding SE_DEBUG_PRIVILEGE and using ReadProcessMemory() to extract the information you want.
You can do the same thing from a Windows Driver, although it is a tad harder to get right due to some complications with what memory is currently paged in to the lower half.
In either case, you need access to an administrative account, a process incorrectly assigned SE_DEBUG_PRIVILEGE, or a process with this privilege that can be persuaded to do what you need.
So, it comes down to ensuring nobody can escalate to obtain these privileges. More realistically, we ensure only trusted users can have these privileges. If you have access to an administrative account, you can quite easily read a password straight out of another account’s processes’ memory.
Under Linux, you can achieve the same thing with ptrace().
You might ask why these functions exist in the first place. Actually, they are incredibly handy for debugging processes, which is something an administrative user may legitimately wish to do. By contrast, normal users should not need to, and should be isolated from each other.
This is why people have advised, for some time, that running everything under the Administrator account is generally not a great idea.
I work in the consumer electronics arena, where security is somewhat different from the server environment: we have to assume that the product sits in a hostile environment. So, for subscriber management purposes, keys are kept secure. The first line of defence is that the SoC has hidden registers that even the operating system cannot access; they are burnt in at manufacture time, and chip fuses are blown to prevent further access. We also never see the keys ourselves, because that would be insecure on the production floor; instead they are pre-packaged with a batch key that we do not know: only the chip vendor and the person who created the key know it (the master key can be destroyed after use in the chip). Once the chip is loaded with secrets, it can be locked and never* unlocked.
If you cannot access the keys, then how do you decrypt anything? With a cryptographic co-processor on the SoC, you can load key slots without ever knowing the values inside them. You also never see the microcode of the crypto-processor, so even at manufacture time nothing can be injected.
If you have keys or certs that won’t fit into the generous chip registers then you have to store them in RAM and/or NVM, but because of the crypto-processor you don’t need to expose those values. The RAM or NVM itself can be scrambled by the chip with a key which is not known by anyone but itself.
Lastly, unlike general-purpose computers, secure embedded systems also have some physical security. RAM connection tracks are not permitted to run on the surface of the PCB (“buried vias”): if any element sits in the clear in RAM, you must limit physical access, because it is possible to slow down or freeze the CPU and then probe the RAM.
Finally, for smart cards, it has been possible to intercept the transactions between the SoC and the card; this is called “card sharing”. The solution is to encrypt the transactions between the card and the SoC and to bind the two to each other, so that they cannot be swapped or shared.
I know that DRM/content security is unpopular with some people on the interwebs, but I thought I would share some high-level concepts from an industry which has some particular security requirements.