Just for fun I asked AI which languages it was best trained on

I asked each of them:

“Out of those 3, which languages are you trained on the best: F#, Elixir, Clojure?”

Claude 3.7 Sonnet:

Out of F#, Elixir, and Clojure, I have the strongest knowledge of Clojure, followed by Elixir, and then F#.

Clojure is a Lisp dialect that runs on the JVM and has excellent support for functional programming and concurrency. I’m familiar with its syntax, core functions, immutable data structures, and its approach to state management.

For Elixir, I understand its functional programming model, pattern matching, processes, and the actor model it implements through the Erlang VM. I’m reasonably familiar with the Phoenix framework and how Elixir leverages OTP for building distributed systems.

My knowledge of F# is more limited compared to the other two. While I understand its basic syntax, type system, and functional programming concepts, I have less experience with its more advanced features and ecosystem.

If you’re interested in working with any of these languages, I’d be happy to help with specific examples or questions.

Copilot (Office Copilot):

I’m familiar with all three languages, but I don’t have a preference or specialization among them. Each language has its unique strengths:

  • F#: Known for its functional-first approach and strong type system, making it great for data processing and financial modeling.
  • Elixir: Built on the Erlang VM, it’s excellent for concurrent and distributed systems, often used in web development and real-time applications.
  • Clojure: A Lisp dialect that runs on the JVM, it’s praised for its simplicity and powerful macro system, ideal for data manipulation and functional programming.

Do you have a specific project or task in mind that you’re considering one of these languages for?

Gemini 2.5 Pro (experimental):

As a large language model, I don’t learn or get “better” at languages in the way a human does. My abilities are based on the vast amount of text and code data I was trained on. This data includes information about F#, Elixir, and Clojure from various sources like websites, books, and code repositories.

It’s difficult to definitively say which of the three I am “trained on the best” because:

  1. Data Volume: My training data is enormous, and while it contains information on all three, the exact proportion dedicated to each isn’t something I have access to. My proficiency likely reflects the general volume of high-quality public information and code available for each language.
  2. Quality of Data: The quality and diversity of the training examples (code snippets, documentation, discussions) also play a crucial role.

However, based on general trends in language usage and available online resources:

  • F#: Benefits from its strong ties to the well-documented and widely used .NET ecosystem.
  • Elixir: Has a very active community, particularly around the Phoenix framework, leading to substantial online resources and discussion.
  • Clojure: While perhaps having a smaller user base than the others in some metrics, it has a dedicated community and a significant body of work, especially given its Lisp heritage and JVM/CLJS platforms.

Conclusion:

I have been trained on substantial data for all three languages (F#, Elixir, and Clojure) and should be able to assist you effectively with tasks related to any of them, such as explaining concepts, generating code snippets, debugging, or discussing best practices.

My performance on a specific task might vary slightly depending on the complexity and the amount of relevant data I encountered during training, but I don’t have an inherent bias or significantly better capability in one over the others in a general sense.

Feel free to ask me questions about any of them!


Just generic AI nonsense garbage. Nothing to see here.


How do you interact with Gemini Pro btw?

It’s free, I just selected it.

These models are not conscious and are not able to directly reflect on what they are trained on. They do not experience training “as an experience”; they simply are their training. They cannot remember or reflect on being trained the way you can as a human.

It is probably possible to align such a model to answer questions like this based on its inherent uncertainty, but to my knowledge this has not been achieved.

Reasoning models are a step in that direction, though. For example, if a model were to reason about your question, it might generate enough context to “notice” that its knowledge of some languages is stronger than its knowledge of others. I’m not sure existing models are smart enough to piece that together, but it’s not out of the question.

But there is no “inherent” meta-knowledge embedded in these models, essentially by definition. They are trained on a snapshot which necessarily predates them.

These answers are almost certainly completely hallucinated nonsense.


Oh, it’s in their so-called “AI Studio”, I see. Thanks.


I can attest that Claude Code and Claude 3.7 Sonnet under Cursor do fairly well with an Elixir/Phoenix/LiveView app. But even with editor rules for it, I still add this to my prompt text:

“Perform analysis for a minimum change set. Do not make changes.”

This lets me review what it comes up with before I implement it with “Proceed with this set of changes.”
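In practice the exchange looks something like this (the task sentence here is a made-up example; only the two control sentences are my actual boilerplate):

    Prompt 1:
      Add an archived_at field to posts and hide archived posts from
      the public index. Perform analysis for a minimum change set.
      Do not make changes.

    (review the proposed change set, then:)

    Prompt 2:
      Proceed with this set of changes.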

So you sit there for 3 minutes typing the best prompt into a box, then you copy-paste the magic sentence, then you wait until the AI has generated something, and then you review that and maybe make changes?

How is that faster than just doing it yourself?

In any decent-sized code base, there are more tables, files, and functions than a person can remember in their entirety. Sure, you have an idea where to look. But you still have to go and make sure, and then cross-reference usage in other files to verify. This is especially true when you are new to the code base yourself.

The AI can review it in minutes, less if you keep the product docs up to date (which it can also do for you). Thus, even if you want to type the changes yourself, which I sometimes do, it can often find the related sections to change faster than I can open the files and navigate to that part of the code. This is especially true when a change crosses multiple files. And that is before I’ve made sure of the code I’m looking at and have begun making the changes.
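(To be fair, some of that cross-referencing is scriptable without AI. In an Elixir project you can ask the compiler for every call site of a module; the module name below is a hypothetical example:)

    mix xref callers MyApp.Billing

The AI goes a step further, though, by reading those call sites and summarizing them in the context of the change you asked for.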

I wrote about doing this for an initial development in a blog post here. In just a few days, I generated the equivalent of several weeks of work. It took far longer to reason through what I wanted to create, how I wanted it created, and which behaviors to include or exclude than it did to generate it.

Since then, the app has had a couple of major enhancements. The first was full Stripe product catalog sync for both subscription plans and add-on products. The second added a decent set of metrics/cost tracking, covering both the simple cloud costs per month and the more complex AI model usage costing across five AI providers and four user tiers.

Of course, as with any code generated by AI or by humans, there’s the test/fix cycle afterwards, which can often take more time than the initial implementation. That’s still true even with AI. The human time is the slowest part. Even with AI accelerating parts of it, development is still a process.
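To give a feel for the usage-costing piece: the core of it is conceptually a small pure function like the sketch below. The module name, providers, rates, and tier multipliers are all illustrative stand-ins, not the real code:

    defmodule UsageCost do
      # Minimal sketch of per-call AI usage costing.
      # Rates are illustrative USD per 1K tokens, not real pricing.
      @rates %{
        openai:    %{input: 0.0025, output: 0.0100},
        anthropic: %{input: 0.0030, output: 0.0150}
      }

      # Hypothetical markup multiplier per user tier.
      @tier_markup %{free: 1.0, basic: 1.0, pro: 1.2, enterprise: 1.5}

      # Cost in USD for one model call.
      def cost(provider, tier, input_tokens, output_tokens) do
        %{input: in_rate, output: out_rate} = Map.fetch!(@rates, provider)

        per_call =
          input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

        per_call * Map.fetch!(@tier_markup, tier)
      end
    end

    # UsageCost.cost(:anthropic, :pro, 1_200, 400)
    # => ~0.0115 USD  (0.0036 + 0.0060, times the 1.2 :pro markup)

Keeping that calculation pure makes it trivial to unit test; the persistence and per-provider token accounting wrap around it.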

A more reliable answer would have cited people’s reviews of the LLMs’ performance when assisting with these languages.