Windows uses CRLF because it inherited it from MS-DOS.
MS-DOS uses CRLF because it was inspired by CP/M which was already using CRLF.
CP/M and many operating systems from the eighties and earlier used CRLF because it was the way to end a line printed on a teletype (return to the beginning of the line and jump to the next line, just like regular typewriters). This simplified printing a file because little or no pre-processing was required. There were also mechanical constraints that prevented a single character from being usable: some time was required to allow the carriage to return and the platen to rotate.
GNU/Linux uses LF because it is a Unix clone.1
Unix used a single character, LF, from the beginning, both to save space and to standardize on a canonical end-of-line; using two characters was inefficient and ambiguous. This choice was inherited from Multics, which used it as early as 1964. Memory, storage, CPU power and bandwidth were all scarce, so saving one byte per line was worth doing. When a file was printed, the driver converted the line feed (new-line) into the control characters required by the target device.
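As a sketch of that output translation, here is a minimal Python example. The function name `onlcr_translate` is illustrative, modeled on the behavior of the real ONLCR output flag in Unix tty drivers, which maps NL to CR-NL on the way to the device:

```python
def onlcr_translate(data: bytes) -> bytes:
    """Expand each LF into CR+LF on output, the way a Unix tty
    driver does when the ONLCR termios flag is set.

    In the kernel this happens per character in the output path;
    a bulk replace is enough to show the idea.
    """
    return data.replace(b"\n", b"\r\n")
```

Internally the file keeps its canonical single-byte line ending; only the stream sent to the printer or terminal carries the extra CR.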
LF was preferred to CR because the latter still had a specific use: by repositioning the print head at the beginning of the same line, it made it possible to overstrike already typed characters.
Apple initially decided to also use a single character but for some reason picked the other one: CR. When it switched to a BSD interface, it moved to LF.
These choices have nothing to do with the fact an OS is commercial or not.
1 This is the answer to your question.
The Wikipedia article on “Newline” traces the choice of NL as a line terminator (or separator) to Multics in 1964; unfortunately the article has few citations to sources, but there is no reason to doubt that this is correct. There are two obvious benefits to this choice over CR-LF: space saving, and device independence.
The main alternative, CR-LF, originates in the control codes used to physically move the paper carriage on a teletype machine, where CR would return the carriage to its home position, and LF would rotate the paper roller to move the print position down one line. The two control characters appear in the ITA2 code, which dates back to 1924 and is apparently still in use (see Wikipedia); ITA2 in turn took them from the Murray variant of the Baudot code, which dates to 1901.
For younger readers it is worth noting that in the mainframe tradition, there was no newline character; rather, a file was a sequence of records which were either fixed length (often 80 characters, based on punched cards) or variable length; variable-length records were typically stored with a character count at the start of each record. If you have a mainframe file consisting of a sequence of variable-length records, each containing arbitrary binary content, converting it losslessly to a UNIX-style file can be tricky.
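To make the difficulty concrete, here is a minimal sketch assuming a hypothetical record format with a 2-byte big-endian length prefix before each record (real mainframe formats such as IBM's record descriptor word differ in detail). The lossiness problem shows up immediately: a record whose binary content happens to contain an LF byte cannot be represented faithfully as a newline-separated line.

```python
import struct

def records_to_lines(data: bytes) -> bytes:
    """Convert length-prefixed variable records to newline-terminated
    lines. Assumes a hypothetical 2-byte big-endian count before each
    record; raises if a record's content would collide with the LF
    separator, since the conversion would then be lossy."""
    out = bytearray()
    pos = 0
    while pos < len(data):
        (length,) = struct.unpack_from(">H", data, pos)
        pos += 2
        record = data[pos:pos + length]
        pos += length
        if b"\n" in record:
            raise ValueError("record contains LF; conversion is lossy")
        out += record + b"\n"
    return bytes(out)
```

A record-oriented file has no reserved in-band separator at all, which is exactly why arbitrary binary content is unproblematic there and problematic in the UNIX model.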
Linux, of course, was just a re-implementation of Unix, and Unix took many of its design decisions from Multics, so it looks like the key decision was made in 1964.
Other answers have traced the inheritance chain back to the 1960s, and teletypes. But here’s one aspect they didn’t cover.
In the days of teletypes, there were times when it was desirable to do something called overstriking. Overstriking was sometimes used to obscure a password, because erasing the password was just not doable. Other times, overstriking was done to get a symbol that was not in the font. For example, the letter O and a slash produce a new symbol.
Overstriking was achieved by sending a carriage return with no line feed, although backspace was sometimes used. For this reason, the Unix people decided against carriage return as the line separator and opted for line feed instead.
This also worked out well when reading texts produced using the CRLF convention: the CR gets swallowed, and the LF becomes the separator.
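That reading convention can be sketched in a few lines of Python; `normalize_newlines` is an illustrative name, not a standard API. It also handles lone CR, the convention classic Mac OS used:

```python
def normalize_newlines(data: bytes) -> bytes:
    """Swallow the CR of CRLF pairs (and treat a lone CR, as in
    classic Mac OS text, as a line break), leaving LF as the one
    canonical separator."""
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
```

Because CR carries no information of its own in a CRLF text, dropping it loses nothing, which is why LF-only readers cope gracefully with CRLF input.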