This is done so that the processor does not use stale values due to caching.
When you access (regular) cached RAM, the processor can “remember” the value that you accessed. The next time you look at that same memory location, the processor will return the value it remembers without looking in RAM. This is caching.
If the content of the location can change without the processor knowing, as could be the case with a memory-mapped device (an FPGA returning data packets, for example), the processor could return the value it “remembered” from last time, which would be wrong.
To avoid this problem, you mark that address space as non-cacheable. This ensures the processor does not try to remember the value.
Any memory region used for DMA or other hardware interactions should not be cached.
If a memory region is accessed by both hardware and software simultaneously (e.g., a hardware configuration register or a scatter-gather list for DMA), that region must be defined as non-cached. For the actual DMA data buffer, the memory can be defined as cached, and in most cases it is advisable to do so, to give the application speedy access to that buffer. It’s the driver’s responsibility to flush/invalidate the cache before passing the buffer to the DMA engine or to the application.
Small update: the “must” above is not correct when specialized hardware is present, i.e. a Cache Coherency Interconnect (CCI), which synchronizes the accesses of the various hardware blocks to memory.