
When to use /dev/random vs /dev/urandom



Use /dev/urandom for most practical purposes.

The longer answer depends on the flavour of Unix that you’re running.


On Linux, /dev/random and /dev/urandom were historically introduced at the same time.

As @DavidSchwartz pointed out in a comment, using /dev/urandom is preferred in the vast majority of cases. He and others also provided a link to the excellent Myths about /dev/urandom article which I recommend for further reading.

In summary:

  • The manpage is misleading.
  • Both are fed by the same CSPRNG to generate randomness (diagrams 2 and 3 in that article).
  • /dev/random blocks when it runs out of entropy, so reading from /dev/random can halt process execution.
  • The amount of entropy is conservatively estimated, but not counted.
  • /dev/urandom will never block.
  • In rare cases very shortly after boot, the CSPRNG may not have had enough entropy to be properly seeded, and /dev/urandom may not produce high-quality randomness.
  • Entropy running low is not a problem if the CSPRNG was initially seeded properly.
  • The CSPRNG is constantly re-seeded.
  • In Linux 4.8 and onward, /dev/urandom does not deplete the entropy pool (used by /dev/random) but uses the CSPRNG output from upstream.
  • Use /dev/urandom.
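In practice you rarely need to open the device file yourself. A minimal sketch in Python (os.urandom() reads from the kernel CSPRNG, via getrandom(2) where available and /dev/urandom otherwise):

```python
import os

# Request 16 random bytes from the kernel CSPRNG.
# On Linux this uses getrandom(2) when available, falling back to /dev/urandom.
key = os.urandom(16)

print(len(key))   # 16
print(key.hex())  # e.g. 'a3f1...' -- different on every run
```

Because this path never blocks once the pool is seeded, it is safe to call from long-running services without stalling them.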

Exceptions to the rule

In the Cryptography Stack Exchange’s When to use /dev/random over /dev/urandom in Linux, @otus gives two use cases:

  1. Shortly after boot on a low entropy device, if enough entropy has not yet been generated to properly seed /dev/urandom.

  2. Generating a one-time pad with information-theoretic security.

If you’re worried about (1), you can check the entropy available in /dev/random.
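A sketch of that check (the /proc path is Linux-specific; on kernels 5.18 and later the value is effectively pinned at 256 once the CSPRNG is initialized):

```python
# Read the kernel's entropy estimate for the input pool (Linux-specific path).
with open("/proc/sys/kernel/random/entropy_avail") as f:
    entropy = int(f.read().strip())

# Values near 0 shortly after boot suggest /dev/random may block.
print(entropy)  # e.g. 256
```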

If you’re doing (2) you’ll know it already 🙂

Note: You can check if reading from /dev/random will block, but beware of possible race conditions.
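One way to sketch that check is with select(2) and a zero timeout; note the time-of-check/time-of-use race: even if the descriptor polls readable, another process can drain the pool before your read():

```python
import select

# Poll /dev/random with a zero timeout: "readable" means a read() would
# probably not block *right now* -- another reader can still win the race.
with open("/dev/random", "rb") as f:
    readable, _, _ = select.select([f], [], [], 0)
    would_block = not readable

print(would_block)  # typically False on a properly seeded system
```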

Alternative: use neither!

@otus also pointed out that the getrandom() system call will read from /dev/urandom and only block if the initial seed entropy is unavailable.

There are issues with changing /dev/urandom to use getrandom(), but it is conceivable that a new /dev/xrandom device could be created based upon getrandom().
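Python exposes the system call directly as os.getrandom() (Linux 3.17+, Python 3.6+); a sketch:

```python
import os

# getrandom(2) reads from the same CSPRNG as /dev/urandom, but blocks
# (only) until the pool has been initially seeded -- no device file needed.
buf = os.getrandom(16)  # flags=0: block until seeded, then never again

print(len(buf))  # 16
```

This gives you the "block only until seeded" semantics without having to open or manage a file descriptor for /dev/random or /dev/urandom.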


On macOS it doesn’t matter, as Wikipedia says:

macOS uses 160-bit Yarrow based on SHA1. There is no difference between /dev/random and /dev/urandom; both behave identically. Apple’s iOS also uses Yarrow.


On FreeBSD it doesn’t matter, as Wikipedia says:

/dev/urandom is just a link to /dev/random and only blocks until properly seeded.

This means that after boot, FreeBSD is smart enough to wait until enough seed entropy has been gathered before delivering a never-ending stream of random goodness.


On NetBSD, use /dev/urandom, assuming your system has read at least once from /dev/random to ensure proper initial seeding.

The rnd(4) manpage says:

/dev/urandom never blocks.

/dev/random sometimes blocks. Will block early at boot if the
system’s state is known to be predictable.

Applications should read from /dev/urandom when they need randomly
generated data, e.g. cryptographic keys or seeds for simulations.

Systems should be engineered to judiciously read at least once from
/dev/random at boot before running any services that talk to the
internet or otherwise require cryptography, in order to avoid
generating keys predictably.
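The boot-time advice above can be sketched as a one-shot check to run before starting network services (a hypothetical init-script helper, not part of any OS; a single blocking read is enough to prove the pool was seeded):

```python
# Block until the kernel pool is seeded by reading one byte from /dev/random;
# only then start services that generate cryptographic keys.
with open("/dev/random", "rb") as f:
    f.read(1)  # returns only once the pool is (estimated to be) seeded

print("seeded")  # safe to read keys from /dev/urandom from here on
```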

Traditionally, the only difference between /dev/urandom and /dev/random is what happens when the kernel thinks there is no entropy in the system – /dev/random fails closed, /dev/urandom fails open. Both drivers sourced entropy from add_disk_randomness(), add_interrupt_randomness(), and add_input_randomness(). See /drivers/char/random.c for specifics.

Edited to add: As of Linux 4.8, /dev/urandom was reworked to use a ChaCha20-based CSPRNG.

So when should you fail closed? For any kind of cryptographic use, specifically seeding a DRBG. There is a very good paper explaining the consequences of using /dev/urandom to generate RSA keys without enough entropy. Read Mining Your Ps and Qs.

This is somewhat of a “me too” answer, but it strengthens Tom Hale’s recommendation. It squarely applies to Linux.

  • Use /dev/urandom
  • Don’t use /dev/random

According to Theodore Ts’o on the Linux Kernel Crypto mailing list, /dev/random has been deprecated for a decade. From Re: [RFC PATCH v12 3/4] Linux Random Number Generator:

Practically no one uses /dev/random. It’s essentially a deprecated
interface; the primary interfaces that have been recommended for well
over a decade is /dev/urandom, and now, getrandom(2).

We regularly test /dev/random, and it suffers frequent failures. The test performs three steps: (1) drain /dev/random by asking for 10K bytes in non-blocking mode; (2) request 16 bytes in blocking mode; (3) attempt to compress the block to see if it’s random (a poor man’s test). The test takes minutes to complete.
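A sketch of that three-step procedure (a reconstruction, not the author’s actual harness; note that zlib framing overhead dominates at 16 bytes, so step 3 is only a crude sanity check):

```python
import os
import zlib

# (1) Drain /dev/random in non-blocking mode.
fd = os.open("/dev/random", os.O_RDONLY | os.O_NONBLOCK)
drained = b""
try:
    while len(drained) < 10_000:
        drained += os.read(fd, 10_000 - len(drained))
except BlockingIOError:
    pass  # pool exhausted (pre-5.6 kernels); nothing more to read right now
finally:
    os.close(fd)

# (2) Request 16 bytes in blocking mode -- this is where old kernels stall.
with open("/dev/random", "rb") as f:
    block = f.read(16)

# (3) Poor man's randomness check: random data should not compress.
assert len(zlib.compress(block)) >= len(block)
print(len(drained), len(block))  # e.g. 10000 16 on post-5.6 kernels
```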

The problem is so bad on Debian systems (i686, x86_64, ARM, and MIPS) that we asked the GCC Compile Farm to install the rng-tools package on their test machines. From Install rng-tools on gcc67 and gcc68:

I would like to request that rng-tools be installed on gcc67 and
gcc68. They are Debian systems, and /dev/random suffers entropy
depletion without rng-tools when torture testing libraries which
utilize the device.

The BSDs and OS X appear OK. The problem is definitely Linux.

It might also be worth mentioning that Linux does not log generator failures; the developers did not want the entries filling up the system log. To date, most failures are silent and go undetected by most users.

The situation should be changing shortly since the kernel is going to print at least one failure message. From [PATCH] random: silence compiler warnings and fix race on the kernel crypto mailing list:

Specifically, I added depends on DEBUG_KERNEL. This means that these
useful warnings will only poke other kernel developers. This is probably
exactly what we want. If the various associated developers see a warning
coming from their particular subsystem, they’ll be more motivated to
fix it. Ordinary users on distribution kernels shouldn’t see the
warnings or the spam at all, since typically users aren’t using

I think it is a bad idea to suppress all messages from a security
engineering point of view.

Many folks don’t run debug kernels. Most of the users who want or need
to know of the issues won’t realize its happening. Consider, the
reason we learned of systemd’s problems was due to dmesg’s.

Suppressing all messages for all configurations cast a wider net than
necessary. Configurations that could potentially be detected and fixed
likely will go unnoticed. If the problem is not brought to light, then
it won’t be fixed.

I feel like the kernel is making policy decisions for some
organizations. For those who have hardware that is effectively
unfixable, then organization has to decide what to do based on their
risk adversity. They may decide to live with the risk, or they may
decide to refresh the hardware. However, without information on the
issue, they may not even realize they have an actionable item.

The compromise eventually reached later in the thread was at least one dmesg per calling module.
