jq has a filter, @csv, for converting an array to a CSV string. This filter takes into account most of the complexities associated with the CSV format, beginning with commas embedded in fields. (jq 1.5 has a similar filter, @tsv, for generating tab-separated-value files.)
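To see the quoting rules @csv takes care of, here is a small illustration (using Python's csv module, not jq itself, purely to show the convention): fields containing commas get wrapped in double quotation marks, and embedded double quotation marks are doubled.

```python
import csv
import io

# Write one row containing both an embedded quote and an embedded comma.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(
    ['John ("Johnnie")', "Doe", "Director, Planning and Posterity"]
)
print(buf.getvalue().strip())
# -> "John (""Johnnie"")","Doe","Director, Planning and Posterity"
```

This is exactly the escaping you will see in the @csv output further down.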
Of course, if the headers and values are all guaranteed to be free of commas and double quotation marks, then there may be no need to use the @csv filter. Otherwise, it would probably be better to use it.
For example, if the ‘Company Name’ were ‘Smith, Smith and Smith’,
and if the other values were as shown below, invoking jq with the “-r” option would produce valid CSV:
$ jq -r '.data | map(.displayName), map(.value) | @csv' so.json2csv.json
"First Name","Last Name","Position","Company Name","Country"
"John (""Johnnie"")","Doe","Director, Planning and Posterity","Smith, Smith and Smith","Transylvania"
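For reference, that output would come from an input file shaped roughly like the following. This is a guess at so.json2csv.json reconstructed from the output; the displayName and value keys come from the jq program, and the objects may well carry other keys (such as rank, used in a later snippet) that are omitted here.

```json
{
  "data": [
    { "displayName": "First Name",   "value": "John (\"Johnnie\")" },
    { "displayName": "Last Name",    "value": "Doe" },
    { "displayName": "Position",     "value": "Director, Planning and Posterity" },
    { "displayName": "Company Name", "value": "Smith, Smith and Smith" },
    { "displayName": "Country",      "value": "Transylvania" }
  ]
}
```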
I prefer to make each record a row in my CSV.
jq -r '.data | map([.displayName, .rank, .value] | join(", ")) | join("\n")'
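The same record-per-row transformation can be sketched in Python for illustration (the data below is hypothetical, assuming each object carries displayName, rank, and value keys as in the jq program above):

```python
# Hypothetical sample data; each dict stands in for one object in .data.
data = [
    {"displayName": "First Name", "rank": 1, "value": "John"},
    {"displayName": "Last Name",  "rank": 2, "value": "Doe"},
]

# For each record, join the three fields with ", " (one row per record),
# then join the rows with newlines -- mirroring the two join() calls in jq.
rows = [", ".join(str(d[k]) for k in ("displayName", "rank", "value")) for d in data]
print("\n".join(rows))
# First Name, 1, John
# Last Name, 2, Doe
```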
Given just this file, you can do something like:
jq -r '.data | map(.displayName), map(.value) | join(", ")' <testfile
The “.” operator selects a field from an object/hash. Thus, we start with “.data”, which returns the array with the data in it. We then map over the array twice, first selecting the displayName, then selecting the value, giving us two arrays with just the values of those keys. For each array, we join the elements with “, ”, forming two lines. The “-r” argument tells jq not to quote the resulting strings.
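The two mapping passes can be illustrated in Python (again with hypothetical sample data, assuming objects with displayName and value keys): one pass collects the headers, the other collects the values, and each list is joined into a line.

```python
# Hypothetical stand-in for the .data array.
data = [
    {"displayName": "First Name", "value": "John"},
    {"displayName": "Last Name",  "value": "Doe"},
]

# First pass: the displayName of every record (the header line).
print(", ".join(d["displayName"] for d in data))
# Second pass: the value of every record (the data line).
print(", ".join(d["value"] for d in data))
# First Name, Last Name
# John, Doe
```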
If your actual file is longer (i.e., has entries for more than one person), you will likely need something a bit more complicated.