Real-time rendering, even modern real-time rendering, is a grab-bag of tricks, shortcuts, hacks and approximations.
Take shadows for example.
We still don’t have a completely accurate and robust way to render real-time shadows from an arbitrary number of lights and arbitrarily complex objects. We do have multiple variants of shadow mapping, but they all suffer from the well-known problems with shadow maps, and even the “fixes” for those are really just collections of work-arounds and trade-offs. (As a rule of thumb, if you see the terms “depth bias” or “polygon offset” in a technique’s description, it’s not a robust technique.)
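To make the “depth bias” point concrete, here’s a minimal Python sketch (not real shader code; the function name and bias value are purely illustrative) of the core shadow-map comparison and the fudge factor it needs:

```python
def in_shadow(fragment_depth, stored_depth, depth_bias=0.005):
    """Return True if the point being shaded is in shadow.

    fragment_depth: depth of the shaded point, as seen from the light.
    stored_depth:   nearest depth recorded in the shadow map for that texel.
    depth_bias:     hand-tuned offset (value is illustrative). Without it,
                    limited shadow-map precision makes surfaces shadow
                    themselves ("shadow acne"); too large a value makes
                    shadows detach from objects ("peter-panning").
    """
    return fragment_depth - depth_bias > stored_depth
```

There is no single bias value that is correct for every scene, light angle and shadow-map resolution, which is exactly why its presence signals a non-robust technique.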
Another technique real-time renderers rely on is precalculation. If something (e.g. lighting) is too slow to compute in real time (and this can depend on the lighting system you use), we can pre-calculate it and store it, then use the pre-calculated data at runtime for a performance boost that often comes at the expense of dynamic effects. This is a straight-up memory-versus-compute trade-off: memory is often cheap and plentiful, compute is often not, so we burn the extra memory in exchange for a saving on compute.
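As a toy illustration of that memory-versus-compute trade (not how a renderer actually bakes lighting), here is a precomputed sine lookup table in Python: we spend a table’s worth of memory once, and each later call is an array index instead of a transcendental evaluation. The table size is an arbitrary choice:

```python
import math

# Pay the compute cost once, up front, and store the results (memory cost).
TABLE_SIZE = 256
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(angle):
    """Approximate sin(angle) by nearest-entry lookup instead of computing it.

    The approximation error shrinks as TABLE_SIZE grows -- more memory
    buys more accuracy, the same dial a baked-lighting system turns.
    """
    index = round(angle / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SIN_TABLE[index]
```

Baked lightmaps work on the same principle at a much larger scale, and share the same weakness: the stored answer is only valid while the inputs (here the angle grid, in a game the scene’s lights and static geometry) don’t change.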
Offline renderers and modelling tools, on the other hand, tend to focus more on correctness and quality. Also, because they’re working with dynamically changing geometry (such as a model as you’re building it), they must often recalculate things, whereas a real-time renderer would be working with a final version that doesn’t have this requirement.
The current answer has done a very good job of explaining the general issues involved, but I feel it misses an important technical detail: Blender’s “Cycles” render engine is a different type of engine from the kind most games use.
Typically games are rendered by iterating through all the polygons in a scene and drawing them individually. This is done by ‘projecting’ the polygon coordinates through a virtual camera in order to produce a flat image. The reason this technique is used for games is that modern graphics hardware is designed around it, and it can be done in real time at relatively high levels of detail. Out of interest, this is also the technique that was employed by Blender’s previous render engine before the Blender Foundation retired it in favour of Cycles.
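A heavily simplified Python sketch of that projection step (camera at the origin looking down +z; the focal length and viewport size are illustrative assumptions, and real pipelines use 4×4 matrices and clipping):

```python
def project(point, focal_length=1.0, width=640, height=480):
    """Project a 3D camera-space point onto a 2D image and return
    its pixel coordinates, or None if it is behind the camera."""
    x, y, z = point
    if z <= 0:
        return None  # behind the camera; a real pipeline clips these
    # Perspective divide: points farther away land closer to the centre,
    # which is what makes distant objects look smaller.
    ndc_x = focal_length * x / z
    ndc_y = focal_length * y / z
    # Map from the [-1, 1] image plane to pixel coordinates
    # (y flipped, since pixel rows grow downwards).
    px = (ndc_x + 1) * 0.5 * width
    py = (1 - ndc_y) * 0.5 * height
    return (px, py)
```

Run this for every vertex of every polygon, fill in the resulting 2D triangles, and you have the essence of what the GPU does millions of times per frame.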
Cycles, on the other hand, is what is known as a raytracing engine. Instead of looking at the polygons and rendering them individually, it casts virtual rays of light out into the scene (at least one for every pixel in the final image), bounces each ray off several surfaces and then uses that data to decide what colour the pixel should be. Raytracing is a very computationally expensive technique, which makes it impractical for real-time rendering, but it is used for offline rendering of images and videos because it provides extra levels of detail and realism.
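In the same stripped-down spirit, here is a toy Python raytracer that casts one ray per pixel and tests it against a single sphere. The scene, shading and all constants are made up for illustration; real path tracers like Cycles fire many rays per pixel and bounce them around the scene:

```python
import math

def intersect_sphere(origin, direction, centre, radius):
    """Return the distance t along the ray to the nearest sphere hit,
    or None on a miss. Assumes `direction` is normalised."""
    ox, oy, oz = (origin[i] - centre[i] for i in range(3))
    dx, dy, dz = direction
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width=4, height=4):
    """Cast one ray per pixel from the origin through an image plane at z=1,
    against a single hard-coded sphere. Returns rows of brightness values."""
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map the pixel to a direction through the image plane.
            x = (i + 0.5) / width * 2 - 1
            y = 1 - (j + 0.5) / height * 2
            length = math.sqrt(x * x + y * y + 1)
            d = (x / length, y / length, 1 / length)
            t = intersect_sphere((0, 0, 0), d, (0, 0, 5), 2.0)
            # "Shade" by depth: nearer hits are brighter, misses are black.
            row.append(0.0 if t is None else 1.0 / t)
        image.append(row)
    return image
```

Even at this toy scale you can see where the cost comes from: every pixel performs its own intersection tests against the scene, and a real renderer repeats that for many samples and many bounces per pixel.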
Please note that my brief descriptions of raytracing and polygon rendering are highly stripped down for the sake of brevity. If you wish to know more about the techniques I recommend that you seek out an in-depth tutorial or book as I suspect there are a great many people who have written better explanations than I could muster.
Also note that there are a variety of techniques involved in 3D rendering and some games do actually use variations of raytracing for certain purposes.