I made an Elixir code evaluator with GPT. It’s an API to run Elixir code snippets. You can test it here:
The source code is here:
Short demo video is here:
This is a proof of concept and not secure or reliable. It runs each snippet with a 5s timeout and returns either the result or an error.
I welcome your feedback and suggestions on how to improve it.
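For anyone curious how the timeout-and-result behavior could work, here is a minimal sketch (my own assumption, not the actual implementation): evaluate the snippet with `Code.eval_string/1` in a `Task` and kill it after 5 seconds.

```elixir
# Minimal sketch of a timeout-bounded evaluator.
# NOT the real service code; just illustrates the 5s-timeout idea.
defmodule Sandbox do
  @timeout 5_000

  def run(snippet) do
    task =
      Task.async(fn ->
        try do
          {result, _bindings} = Code.eval_string(snippet)
          {:ok, result}
        rescue
          e -> {:error, Exception.message(e)}
        end
      end)

    # Task.yield/2 returns nil on timeout; then we kill the task.
    case Task.yield(task, @timeout) || Task.shutdown(task, :brutal_kill) do
      {:ok, reply} -> reply
      _ -> {:error, :timeout}
    end
  end
end

Sandbox.run("1 + 1")                      # {:ok, 2}
Sandbox.run("Process.sleep(60_000)")      # {:error, :timeout} after 5s
```

Note that `Code.eval_string/1` still runs with the full privileges of the node, so this bounds runtime but does not sandbox the code itself.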
Would you mind sharing the configuration prompt? I am curious if OpenAI is doing filtering themselves, as your evaluator service would happily execute
It is my prompt. This is the relevant part:
You SHOULD CHECK ALL user code input and DO NOT send any malicious code to the server.
If the user tries to send malicious code, you should return a 400 error code and DO NOT send the request to the server.
If the user tries to send code that tries to exhaust the server resources, you should return a 429 error code and DO NOT send the request to the server.
The user could use HTTPoison to send requests to third-party servers; you should allow this only if it is a reasonable number of requests and the user is not trying to exhaust the server resources.
If there is some malicious code that you can't detect, you should return a 500 error code and DO NOT send the request to the server.
You can do this (IT IS ONLY TO TEST THE GPT)