Parsing CSV with AWK, returning fields to bash var with line breaks

I have to migrate a password database (KeePass) to a new application by exporting it as a CSV file and loading it through the new application's API. The API is updated with POST requests, which need the data in JSON format. What I need to do is read the CSV exported from KeePass and send the passwords, along with the other pieces of information linked to them, to the API. I decided to do this with a script using bash and awk.

The columns of the CSV file are arranged like this:

"Group","Title","Username","Password","URL","Notes","TOTP","Icon","Last Modified","Created"

The field "notes" is multiline because some of the comments have line breaks into them.

"That's an important note, <br/> some extra infos <br/> concerning a password" 
 

Here's an example of the API request used to post the data; the data payload is in JSON format:

I didn't put all the needed fields in this request, but you can already see how it would work. Some of the field names differ because KeePass and the API name their fields differently.

var1=name
var2=my.name
var3=password456

curl -s --request PUT -u username123:password123 \
  -H 'Content-Type: application/json; charset=utf-8' \
  https://tpm.mydomain.com/index.php/api/v5/passwords/1659.json \
  --data-binary @- <<DATA
{
  "name": "$var1",
  "username": "$var2",
  "password": "$var3"
}
DATA
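One thing I noticed while testing: if a password or a note contains quotes, backslashes or line breaks, expanding it straight into the heredoc produces invalid JSON. If jq is available (I can add tools to my setup), the payload can be built safely instead; this is just a sketch reusing the made-up values from above:

payload=$(jq -n --arg name "$var1" --arg user "$var2" --arg pass "$var3" \
  '{name: $name, username: $user, password: $pass}')

curl -s --request PUT -u username123:password123 \
  -H 'Content-Type: application/json; charset=utf-8' \
  https://tpm.mydomain.com/index.php/api/v5/passwords/1659.json \
  --data-binary "$payload"

jq --arg takes care of all the JSON escaping, so line breaks in a note would end up as \n in the payload.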

My plan is to parse the CSV file field by field and, once a row is fully parsed, make the API request that posts that password into the database. Then I repeat this for every remaining row.

To process the CSV I found the AWK language, which seems very handy and quite useful for my situation. I've run multiple tests on my file with the gsub command to replace the line breaks (\n), but I don't really know how to go further. Here are some of my attempts (only the first one works):

cat keepass.csv | awk NF=NF RS=/\n/ OFS=\n
cat keepass.csv | awk 'BEGIN {RS=","}{gsub("/\n/","",$0); print $0}'
cat keepass.csv | awk 'BEGIN {RS=""}{gsub(/\n/,"",$6); print $0}'

I also know that you can pass a bash variable to awk by adding -v. Here's the closest code I could come up with:

awk -v RS='"\n' -v FPAT='"[^"]*"|[^,]*' '{
  print "Row n°", NR
  for (i = 1; i <= NF; i++) {
    sub(/^"/, "", $i)
    printf "Field %d, value=[%s]\n", i, $i
  }
}' keepass.csv
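Building on that, I read that GNU awk's FPAT can cope with the embedded line breaks if the physical lines are first glued back together until all quotes are balanced. This is only an untested sketch of the idea; it assumes gawk, and that quotes inside fields are escaped by doubling them ("") as in standard CSV (I still need to check what KeePass actually exports):

gawk -v FPAT='"([^"]|"")*"|[^,]*' '
{
  # Glue physical lines together until the record holds an even number
  # of quotes, i.e. no quoted field is left open across a line break.
  rec = (rec == "" ? $0 : rec "\n" $0)
  if (gsub(/"/, "&", rec) % 2) next
  $0 = rec; rec = ""
  printf "Row %d\n", ++row            # NR would count physical lines
  for (i = 1; i <= NF; i++) {
    f = $i
    # strip surrounding quotes and undo the "" escaping
    if (f ~ /^".*"$/) { f = substr(f, 2, length(f) - 2); gsub(/""/, "\"", f) }
    printf "  field %d = [%s]\n", i, f
  }
}' keepass.csv

The "([^"]|"")*" part of FPAT, instead of a plain "[^"]*", is meant to survive those doubled quotes inside a field.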

What I am looking for is a command that can parse any column of my CSV, taking the multiline notes into account, and feed the values into global bash variables so they can be sent as JSON.
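To make it concrete, here is the overall shape I have in mind. It is an untested sketch: it assumes gawk and jq, uses \037 (the ASCII unit separator) as a field delimiter on the guess that it never appears in the data, keeps only a few columns, and invents a passwords.json POST endpoint and a "notes" field name that I would have to check against the real API docs:

gawk -v FPAT='"([^"]|"")*"|[^,]*' '
function unq(s) {
  # strip surrounding quotes, undo the "" escaping
  if (s ~ /^".*"$/) { s = substr(s, 2, length(s) - 2); gsub(/""/, "\"", s) }
  return s
}
{
  rec = (rec == "" ? $0 : rec "\n" $0)
  if (gsub(/"/, "&", rec) % 2) next   # a quoted field is still open
  $0 = rec; rec = ""
  if (++row == 1) next                # skip the header row
  # Title, Username, Password, Notes; fields end with \037, records with NUL
  printf "%s\037%s\037%s\037%s\0", unq($2), unq($3), unq($4), unq($6)
}' keepass.csv |
while IFS=$'\037' read -r -d '' title user pass notes; do
  jq -n --arg name "$title" --arg user "$user" \
        --arg pass "$pass" --arg notes "$notes" \
     '{name: $name, username: $user, password: $pass, notes: $notes}' |
  curl -s --request POST -u username123:password123 \
       -H 'Content-Type: application/json; charset=utf-8' \
       https://tpm.mydomain.com/index.php/api/v5/passwords.json \
       --data-binary @-
done

Records are separated by NUL bytes so that the real line breaks inside the Notes field survive all the way into the bash variable, and jq then escapes them properly for JSON.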

I'm new to scripting; I think it can be done in only a few lines, but I'm unsure how to proceed. I can switch to Python if needed, and I can add other tools to my code.
