Using AI large language models to assist with writing R code

Hi everyone,

This is not really a question, but more an opportunity to open a discussion topic that I think is interesting: the use of AI large language models to assist in writing scripts and code.

I have been experimenting with this recently, because I got stuck on how to complete a shiny app that I am making.

The inspiration for this app was the (still ongoing) cholera outbreak in Haiti, where patients often didn’t know which health administrative district they were resident in. Addresses often had spelling variations that made them difficult or impossible to geolocate automatically. However, there was a need to see where clusters of patients were occurring geographically. Data entry clerks who knew the area were often able to pinpoint where patients lived on a map, but the information needed to be digitised so that it could be appended to other patient data and analysed.

To address this, I have been working on a Shiny app that allows users to explore and identify locations of interest on an interactive leaflet map, click on a point, type a patient ID for that point in a pop-up dialogue box, and then view and download the IDs and GPS coordinates for the selected points in a table. The .csv file with the table of results could then be uploaded and/or appended to a patient database or similar. With a little help from Stack Overflow, I was able to write this part of the app myself.
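For anyone curious what this click-and-register pattern looks like, here is a minimal sketch (not the actual app code, which is linked further down the thread). It assumes the shiny, leaflet, and DT-free base table rendering, and all input IDs are made up for illustration:

```r
# Minimal sketch of a click-to-register Shiny app
# (assumes the shiny and leaflet packages are installed)
library(shiny)
library(leaflet)

ui <- fluidPage(
  leafletOutput("map"),
  tableOutput("points"),
  downloadButton("dl", "Download .csv")
)

server <- function(input, output, session) {
  # Reactive store for the registered points
  pts <- reactiveVal(data.frame(id = character(), lat = numeric(), lng = numeric()))

  output$map <- renderLeaflet(leaflet() |> addTiles())

  # When the map is clicked, ask for a patient ID in a modal dialog
  observeEvent(input$map_click, {
    showModal(modalDialog(
      textInput("pt_id", "Patient ID:"),
      footer = actionButton("ok", "Save point")
    ))
  })

  # Append the ID and coordinates of the clicked point as a new row
  observeEvent(input$ok, {
    click <- input$map_click
    pts(rbind(pts(), data.frame(id = input$pt_id, lat = click$lat, lng = click$lng)))
    removeModal()
  })

  output$points <- renderTable(pts())

  # Let the user download the accumulated table as a .csv file
  output$dl <- downloadHandler(
    filename = "selected_points.csv",
    content = function(file) write.csv(pts(), file, row.names = FALSE)
  )
}

shinyApp(ui, server)
```

The key pieces are `input$map_click` (a list with `lat` and `lng` supplied by leaflet) and a `reactiveVal` that accumulates one row per confirmed point.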

I also wanted the app to have a couple of other features:

  1. optional upload of a shapefile for health administrative district boundaries, that could be used to determine which district each point was in;
  2. five language buttons, one for each UN language, which would translate all the text in the app to the desired language when clicked.
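Feature 1 boils down to a point-in-polygon spatial join, which the sf package handles directly. A minimal sketch, assuming a hypothetical shapefile path and a hypothetical `district_name` attribute column:

```r
# Sketch of feature 1: assigning each registered point to a district polygon
# (assumes the sf package is installed; file path and column names are made up)
library(sf)

# District boundary polygons read from the uploaded shapefile
districts <- st_read("districts.shp")

# Points registered by the user, as plain coordinates
pts <- data.frame(
  id  = c("P001", "P002"),
  lng = c(-72.3, -72.5),
  lat = c(18.5, 18.6)
)

# Convert to an sf object using the same CRS as the district polygons
pts_sf <- st_as_sf(pts, coords = c("lng", "lat"), crs = st_crs(districts))

# Spatial join: each point inherits the name of the polygon it falls inside
pts_with_region <- st_join(pts_sf, districts["district_name"])
```

`st_join()` defaults to an intersects predicate, so points outside every polygon simply get `NA` in the region column rather than an error.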

This is where I started to struggle. Making the translation file with the shiny.i18n package is an arduous process (although worth it in the end). Reading a shapefile (and its component files) into a Shiny app is technically challenging. I managed to get part of the way there using the shinyFiles package, but could not quite get it to work, and couldn’t figure out where the issue was, despite similar questions having been asked before on Stack Overflow.
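For context on why this is fiddly: a shapefile is really several files (.shp, .shx, .dbf, .prj) that must sit side by side, but Shiny stores uploads under randomised temporary names. One common workaround (an alternative to the shinyFiles approach mentioned above, and only a sketch, untested in this particular app) is to accept all the component files in one `fileInput` and copy them back to their original names before reading:

```r
# Sketch of a multi-file shapefile upload in Shiny
# (assumes the shiny and sf packages are installed)
library(shiny)

ui <- fluidPage(
  fileInput("shp", "Upload shapefile components",
            multiple = TRUE,
            accept = c(".shp", ".shx", ".dbf", ".prj"))
)

server <- function(input, output, session) {
  boundaries <- reactive({
    req(input$shp)
    # Shiny saves each upload under a random temp name; sf needs the
    # original names together in one directory, so copy them across first
    tmpdir <- tempdir()
    file.copy(input$shp$datapath, file.path(tmpdir, input$shp$name))
    # Read the .shp; sf finds its sibling .shx/.dbf/.prj by name
    sf::st_read(file.path(tmpdir, input$shp$name[grepl("\\.shp$", input$shp$name)]))
  })
}

shinyApp(ui, server)
```

If only the .shp arrives without its siblings, `st_read()` will fail or lose attributes, which may explain some of the hard-to-diagnose behaviour described above.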

Coincidentally, a colleague mentioned how they had started to use ChatGPT and Gemini/Bard to create R scripts. I was intrigued, but a bit sceptical at first: surely pulling together all the instructions is 75% of the work of creating the script? Given that I was stuck with my Shiny app, I decided to give it a try.

Below is a brief account of my experiences so far:

First of all, I decided on a paragraph of text to describe what I wanted to do and submit to the AIs. I would then compare their outputs to my hand-written version and see if any of them got me closer to a solution. This was my initial ‘question’:

Please create an R shiny app that allows users to type the name of a country in a search bar and shows them an interactive leaflet map of that country. Give users the option to superimpose polygons from a shapefile on top of the leaflet map. Allow users to pan and zoom in to an area of interest on the map, click on a point, then type in an ID for that point in a pop-up dialogue box. Register and display the ID and GPS coordinates for the selected point in a table below the map. Allow the user to identify as many points as they like in this way and add each one in a new row to the table underneath the map. If the user has uploaded a shapefile with administrative boundaries, determine which polygon each point falls into and add the name of that polygon to a fourth column called region in the table. Let users download the results table when finished in a .csv file.

I tried the following AI large language models:

  • ChatGPT
  • Gemini/Bard
  • Perplexity

1. ChatGPT
ChatGPT really struggled. I had to pare my question down to only its most basic components in order to get any output. The output did give some basic code to complete the first part of my task (an interactive, clickable map), but the point registering didn’t work properly. When I asked follow-up questions to try to improve it, I got an ‘I don’t understand’ response, so I had to stop at that point.

2. Gemini/Bard
Gemini was a little better. I got further before it ran into difficulties, and I liked how it explained what it was doing below the code. However, my optional extras (uploading a shapefile and language translations) proved too difficult for Gemini to solve (at least the way that I asked the question).

3. Perplexity
I had not heard of Perplexity before another colleague mentioned trying it out for literature reviews. It seems that while ChatGPT will incorrectly combine information from separate contexts, Perplexity is less likely to do this. Perplexity also gives you all the sources it has used for a solution, and you can deselect any sources that you think are irrelevant or inappropriate.
Perplexity was able to solve the first part of my problem without issue.

Getting the shapefile upload to work was an interesting process. I found I had to prompt Perplexity with the names of the packages I thought it should use for specific tasks (like shinyFiles in this instance), so that it didn’t waste time on other approaches that I had already tried and knew didn’t work. Testing the code, I managed to find where the problem was: the file path being registered was incomplete, which was easily solved by adding here() to complete it. This came after several ‘follow-up’ questions: Perplexity would modify the solution, I would test it, and we eventually narrowed it down to the line that wasn’t working. Feeding Perplexity verbatim but generic R error messages didn’t work as well as giving it my best guess at what the error meant (which was interesting; I thought it would be the other way around, since the sources were often Stack Overflow posts).

With the language translations, this is where Perplexity was very helpful, saving a lot of typing and back-and-forth to translation platforms. Perplexity created the JSON file that the shiny.i18n package needed. This did not work seamlessly, however; some elements are translated but others are not, and I’m still trying to figure out why.
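For readers who haven’t used shiny.i18n: the translation file is a JSON dictionary that pairs every UI string with its translations. A minimal illustration (the strings and language pair here are invented, not taken from the actual app):

```json
{
  "languages": ["en", "fr"],
  "translation": [
    { "en": "Download results", "fr": "Télécharger les résultats" },
    { "en": "Upload shapefile", "fr": "Téléverser un shapefile" }
  ]
}
```

In the app, this file is loaded with `i18n <- Translator$new(translation_json_path = "translation.json")`, and each UI string is wrapped as `i18n$t("Download results")`. A string that is not wrapped in `i18n$t()`, or whose text doesn’t exactly match an entry in the JSON, silently stays untranslated, which is one plausible cause of the partial-translation behaviour described above.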

I have noticed that when the ‘session’ gets very long, Perplexity gets tired and starts either reapplying solutions it has already tried or modifying the code but truncating part of it. The truncated code could be restored with a follow-up question asking it to put the missing part back in.

Ultimately I aborted the session and started again, this time making my question more precise, including the specific packages and functions that I wanted to use, and explicitly stating which stumbling blocks to avoid. I’m on the fence about whether this strategy (supplying all the information at once, in as much detail as possible) is better than building up the questions one task at a time.

So far, for me personally, I find this approach useful for Shiny apps: Perplexity was able to order and activate the different elements in a way that I struggled to do solo, and which cannot be solved just by reading other people’s Stack Overflow posts, as it is very app specific. I also found the creation of the translation file very efficient (whether the translations are correct still needs to be tested, though). Another element I found useful was the explanations underneath the code; these did help me learn and seemed relatively accurate. I’m sure this is partly because the posting style on Stack Overflow is well enforced and most posts are high quality and explicit, which probably makes the AI’s task easier.

This is still a work in progress, but I would be very curious to hear about others’ experiences of using AI in this way. I’ll post a link to the GitHub repository when it is all working.


Nice topic, @amy.mikhail .

I’ve been using GPT-4.5 mostly to enhance the aesthetics of my Shiny apps. I noticed it easily builds the CSS file, accurately tagging everything in the UI and applying very specific CSS attributes, even though CSS is a language I have little experience with.

I also noticed that GPT usually struggles with complex apps, but if it’s a simple app with direct instructions, it builds the structure very easily.

As a recent example, I developed an app two weeks ago, and the process was incredibly fast thanks to ChatGPT (a few hours), which helped me with the general structure.

My app

The app is in Portuguese, and it monitors service requests related to animals of public health importance in the municipality of São Paulo, through citizen services.

Very nice to hear about your experience with other AI tools. I’ve never tried them, but I’ve been having a satisfactory experience with ChatGPT so far. I’m using the Pro version and have noticed a considerable improvement in the responses after subscribing. This might be worth mentioning as well.


Thanks for kicking off this super interesting discussion, Amy. Your cholera outbreak app sounds like a really useful tool!

I’ve been playing around with AI for R coding too, though mostly for smaller stuff. Your project seems like a great way to really put these LLMs through their paces for more complex app development.

For my own experiments, I use this thing called TypingMind - it’s a paid interface that lets me connect to a bunch of different LLMs via API. Pretty handy cuz I can try out models from OpenAI, Anthropic, Google, and others, but only pay for what I actually use.

I’m kinda curious which specific versions of the LLMs you tried out. Like, was it GPT-4 Turbo or GPT-4o, Gemini Pro or Gemini Flash, that sorta thing? That info could help us get a better sense of how they stack up for R coding.

Your approach of tweaking your prompts and giving more context as you go sounds smart. I’ve found that helps a ton too - the more specific we can be, the better results we usually get.

I’d be totally up for collaborating on your project if you want another set of eyes. Feel free to add me on GitHub (I’m temuulene) if you’re looking for a volunteer contributor. Either way, definitely share the link when it’s ready - I’d love to check it out!


Hi Temuulen,

You can find my app here. There are currently two versions of the app, the one I manually created and the latest AI-generated version. You can find both in the inst/app subfolder. This issue explains which parts of the app are still not working properly (some text that is not being translated).

For all three AI platforms, I used the free versions.

As you say @lnielsen, there seems to be a significant difference between the free and pro versions (at least for ChatGPT and Google Gemini). The free version of Perplexity seems to function better than the other two.

Here are a few articles discussing the merits of different AI engines for coding / developer tasks: 1 2 3. Note that one of the models used by Perplexity is GPT-4 (which now has me confused, as I thought that was only available in the ChatGPT pro version). In any case, the key difference between ChatGPT and Perplexity outlined by the article is that:

Perplexity is an AI search engine, and ChatGPT is a conversational AI chatbot.

Another one worth mentioning (because RStudio gives you the option to use it directly inside R scripts) is GitHub Copilot. This video explains how you can use comments in your R script as prompts to generate code with GitHub Copilot (it is not free, though).