How does the Elixir community feel about the ethical complexities of AI, OSS, and learning/teaching?

How do you all feel about the use of AI tools for building Elixir/Phoenix projects, with respect to these topics? Longform context below. I get a strong sense of enthusiasm for these tools and the creativity they can bolster from this community, which is great ~ what’s harder to come by are discussions that range beyond the practical applications of these tools, written from a personal viewpoint (not longform blogs, which have become hard to take at face value on either side of the discussion these days).

Thank you for anything you’d like to share.

  • The LLMs behind these tools were likely trained on data taken from OSS authors without their permission, and without attribution ~ do the benefits of these tools for OSS outweigh this?

  • LLM-driven development may make it harder for the less-experienced among us (myself included, when it comes to Elixir/Phoenix) to learn our craft and build valuable experience ~ this is highly individual, but industry forces and corporate influences cannot be ignored when it comes to the pressures on the budding software dev in 2025.

I know there are other areas of ethical complexity with these tools, but these are the two I’d like to focus on for now. I’m less interested in debate here; mostly just looking for a solid set of data points from members of the community. All thoughts and opinions are welcome.

~

Hi all, I’m a passionate coder, tinkerer, and creator who’s spent a couple decades of their career focused on web dev through JS (now TS) and, professionally, Python on the backend ~ it just keeps happening.

I have a strong interest in and alignment with OSS ideals, even having had the opportunity and satisfaction of making a few notable contributions myself over the years.[1] I’ve experienced the frustration and joy of pouring hours into OSS project maintenance, fixing bugs reported by eager (occasionally annoyed) folks using your project, for no real reason you can explain to yourself other than “it’s kind of fun.”

Over the last two years, I’ve nurtured a voyeuristic passion for Elixir and Phoenix ~ probably a result of all too much time spent working with JS-fueled hacks (I mean frameworks) intended to paper over limitations of the web platform. I guess one could describe Phoenix LiveView this way, but as someone who’s lived and breathed most of what JS has to offer, I’ll say this websocket-centric approach feels like the right way to hack the limited capabilities of the browser into something new. It’s truly incredible, and I dream of being so lucky as to work professionally with these tools someday.

Last year I had the pleasure of working through George Arrowsmith’s excellent Phoenix course[2] with the twist of translating the HEEx into Temple[3] for fun. This might be controversial, but it felt to me like the best medium for expressing UI I’ve ever witnessed ~ sometimes you just gotta try something because it seems cool. The results of that experiment can be found below[4] if anyone is interested. I even had some fun and built a couple of Hex packages in the process[5][6], though progress has slowed since then (becoming a parent; its own set of challenges/adventures).
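For anyone who hasn’t seen Temple, here’s a rough sketch of the kind of translation involved ~ the markup is a contrived example of my own, and the Temple side loosely follows the style of its README (assume @items is a list of strings available in the template’s assigns):

```elixir
import Temple

# The HEEx version of a contrived list:
#
#   <ul class="list">
#     <li :for={item <- @items} class="item"><%= item %></li>
#   </ul>
#
# The same markup expressed in Temple's Elixir DSL:
temple do
  ul class: "list" do
    for item <- @items do
      li class: "item", do: item
    end
  end
end
```

Everything is plain Elixir ~ comprehensions, case expressions, and pipes all work inline, which is a big part of the appeal.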

I’ve sensed a lot of enthusiasm for AI-driven software tools in the Elixir community of late, which mirrors the strong enthusiasm that seems to be taking hold across our field. I have no commentary here ~ to each their own, and like most of us (I assume) I’m just trying to adapt to the increasing pace of change around us with as healthy an approach as I can figure out. My natural tendencies have led me toward skepticism of AI, but I also place high value on an open mind and on supporting the creativity of others through whatever methods they personally find most enriching.

I haven’t personally used LLMs much in my professional work, but pressure to do so is definitely increasing by the month (and even by the week). My small company has an upcoming “AI Hackathon,” where I had an idea: combine my interests with those of leadership, and take the opportunity to go all-in on AI tools to rebuild part of our product using LiveView. I’m sure I won’t learn as much about Elixir & Phoenix as I would building the old-fashioned way ~ or maybe I’ll learn more, who knows! In any case, I only have a week.

Repeating my intro paragraph from the start, I’d like to know ~ how do you all feel about the use of AI tools for building Elixir/Phoenix projects, with respect to these topics?

  • The LLMs behind these tools were likely trained on data taken from OSS authors without their permission, and without attribution ~ do the benefits of these tools for OSS outweigh this?

  • LLM-driven development may make it harder for the less-experienced among us (myself included, when it comes to Elixir/Phoenix) to learn our craft and build valuable experience ~ this is highly individual, but industry forces and corporate influences cannot be ignored when it comes to the pressures on the budding software dev in 2025.

I know there are other areas of ethical complexity with these tools, but these are the two I’d like to focus on for now. I’m less interested in debate here; mostly just looking for a solid set of data points from members of the community. All thoughts and opinions are welcome.

Thank you for anything you’d like to share, and for all contributions you’ve made to this vibrant and welcoming community. Learning through lurking here has been a blast, and your effort and enthusiasm for these tools over the years has made this amazing ecosystem what it is today.

~

[1] GitHub – evnp/tmex: Lightweight tmux cmd/layout composer (1 shell script, 0 dependencies exc. tmux)
[2] https://learnphoenixliveview.com/
[3] GitHub – mhanberg/temple: An HTML DSL for Elixir and Phoenix
[4] GitHub – evnp/cocktails.coffee: lib/cc_web/live/realms_live/world_map.ex (main branch)
[5] regex_formatter on Hex
[6] unique_words_sigil on Hex

No, since they also have paid plans. I don’t believe Big Tech respects OSS enough to avoid using information from it in their paid plans. :-1:

Companies try to force LLMs to do everything, as if they were real “AI,” which only ends in frustration when user expectations aren’t met. The best “AI” tools are the old tools based on solid algorithms, with only the input method changed (a chatbot).

I believe that if people really focused on serious research, LLMs could be excellent for learning new things and improving skills. However, we don’t live in a world full of altruists who spend billions just to help other people.

So far, companies are trying to fire developers who “demand too much” while still spending thousands, if not tens of thousands, on their hardware (not for cloud/server purposes, of course). It’s like expecting gamers to use netbooks and using an LLM to “replace demanding gamers.” :man_facepalming:

We have lots of models in lots of variations. Let’s even assume the original model author paid to use those OSS projects ~ but what about all the others? Do you really think you hold full copyright over what an LLM produces? Most people don’t read the terms of use, and those who do most likely don’t have time to follow their every update.

If you don’t have a legal team, it’s terribly hard to tell what your rights are, since everything may change. Do we even have legal regulations for this? Depending on the legal system, we may not even know whether the chatbot’s creators have a responsibility to inform us about a problem. And if we’re safe, can we say the same about the ≈200 other countries? Do all of them respect our copyrights?

It’s one huge, frustrating mess that is basically out of control. What stopped corporations from moving factories to China? Nothing. So what’s stopping them from moving their “AI teams” if our laws become “too restrictive”? As always, the biggest evil in a new tool is its owner/user.

1 Like

I have some intellectual property experience. I’ve never been persuaded by this point. Legally I don’t think it’s infringement. Morally, I don’t see how it’s against the spirit of the MIT or most OSS licenses.

2 Likes

I’m finding the opposite: I’m ultra-senior but new to customer-facing Phoenix and completely new to LiveView.

It’s been a fantastic learning experience. I think this might depend on one’s learning style. Maybe also how one came to be a programmer. I’m self-taught beginning when I was 13 or so. EDIT: And then 4-year college Computer Science and a bunch of grad school.

1 Like

I don’t think there is a single restrictive license that could be applied to every other one, and even if there were … are you licensing your code under it?

Do you know the original authors or the original license? Assuming it’s MIT is like assuming the world exists to fit my own needs and that everyone is supposed to help me achieve what I want. How does that sound?

Note that it’s not “AI,” but an LLM. Why does that matter? It’s a Large Language Model; would it really surprise you if data from two sources with conflicting licenses were used to generate a single solution? You don’t even have to worry about your own license if the code on its own is completely illegal.

Surely you don’t “trust” the “AI” source any more than you trust its license, right? If you did, then you’ve learned a lot of fake information. How can we even trust a tool that cannot do the simplest tasks correctly without hallucinating?

I really can’t understand how people can blindly assume the best case all the time and ignore the possible consequences. Imagine that a person known from TV as a thief says: take this money and trust me, it comes from a legal source. Could you be so sure you’d be safe in court? Or is it a kind of “low risk, no problem” idea? Now let’s play with words and say that the source was originally legal, until the money was stolen … I don’t see even a single line of defence here …

I’m worried that this hype is becoming too dangerous and could lead to unpredictable things. In the worst case, if people stop questioning things, we could end up without any progress. If that happened globally, we would even see technological regression in the long term. :fearful:

1 Like

In my experience, the tool can correctly do the simplest tasks.

How can we trust co-workers to do the same?

You may want to have a look at some of these threads too:

You’ve sort of answered this in your question (bit in bold). Most OSS contributions were made before AI existed, and if we look at what’s going on in the wider space, we see numerous lawsuits against AI companies for essentially stealing other people’s work (lots of threads relating to this on Devtalk). It’s a bit different with OSS, since most OSS licences pretty much allow anyone to do whatever they want - tho it could be argued that the intent was significantly different back then, and that those licenses were more about offering ‘free to use’ rights while providing guarantees in case the maintainer(s) died, lost interest, or changed their minds.

I expect the pushback from OSS will come a little later than, say, from artists, musicians, and other industries that are being impacted more severely right now - the impact in the dev industry may be felt later, through job losses (tho it’s already started) or AI companies essentially killing off languages once they’ve used them to get where/what they want.

Many would argue we should exercise caution - history tells us that big tech/capitalism on that level simply cannot be trusted:


There will probably be pros and cons, though some early indications are not good:

Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results.

The study divided 54 subjects—18 to 39 year-olds from the Boston area—into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.

It may be too early to tell - one thing may well be a given though, we probably don’t want to walk into the age of AI blindly or without caution…

2 Likes

Thank you all for the thoughtful responses. I’m actually very encouraged to see such a diversity of viewpoints right off the bat. Even if there are no clear answers (which is not a comment on the quality of discussion here!) knowing that these perspectives are strongly represented in this community creates a stronger draw for someone conflicted like myself to engage with it.

It may be too early to tell - one thing may well be a given though, we probably don’t want to walk into the age of AI blindly or without caution…

Well said. Both enthusiasm and skepticism are essential for a community that hopes to evolve in the right directions. I wonder if the same can be said of an individual.

I find myself in agreement with much of what you’ve said @Eiji, and I unfortunately am finding it increasingly difficult to reconcile these beliefs with the daily life, influences, and pressures of a working software dev whose main priority in life (at present) is to provide for their family. I find myself gradually starting to ask questions like, “do my own personal principles really matter, if something is already here, it exists, and perception of my success in my job seems increasingly predicated on embracing that thing?” I don’t think my ideal world would have AI in it, but I have to accept that others have different experiences and feel differently, and that none of us live in our own personal ideal worlds. At the same time, principles matter, ideals matter, and I have immense respect for that. We should all do what we can to influence the world around us for the better.

@dogweather Thanks for weighing in, I’m glad you’re finding valuable experiences in working with these tools. It’s fascinating how disparate reports seem to be with this ~ learning style does seem highly relevant. If you did feel like elaborating on that with any further thoughts on your personal experiences, I’m all ears. What aspects of a learning style, or one’s approach, might make for better (or not) growth while using LLMs?

@AstonJ Appreciate the broad perspective and summary of the state of play. Thanks for compiling those links, will definitely be reading in depth. Very interesting study as well; I hadn’t come across anything that went as far as EEG analysis. There’s also the recent METR study showing programming tasks taking longer with AI assistance. Again, it’s so incredibly perplexing how these findings seem to go against the reported experiences of many around us (numerous colleagues on my part, not to mention so. much. discourse online). I’m getting the sense that LLM-assisted/driven software development may be an entirely different beast than generative-AI tech applied in other areas, for better or worse – not to say they don’t share many of the same pitfalls and moral quandaries. What disturbs me most is the relative decrease in discourse I’m sensing across the board about topics like

The LLMs behind these tools were likely trained on data taken from OSS authors without their permission, and without attribution ~ do the benefits of these tools for OSS outweigh this?

It feels like maybe we just got collectively tired of talking about this, but the more powerful these models get and the more enthusiastic their proponents become (nothing against them), the more important it seems to keep these ethical complications in mind, rather than trying to ignore them or put them behind us. I’m feeling more compelled than ever (sort of against my will) to engage with these tools, and I’d rather things be complicated if that’s where we’re headed.

2 Likes

I’m thinking about my answer for you.

But a quick question: you haven’t had good learning experiences working with AI?

1 Like

Thanks @dogweather, very interested to hear more. Sorry about the delay, that’s a great question. As you’ve probably gathered, I’ve so far fallen mostly on the skeptical/reluctant side of these issues, but I do work for a company that has recently developed plenty of AI enthusiasm and multiple AI products (bolted on, not the core offering). So full disclosure, I have only a surface-level familiarity with these tools and integrating them into my work and learning processes. I understand there are many “prompting skill issues” and “using the wrong model” issues when it comes to AI; I don’t assume I can use it well.

My main context here comes from observations of co-workers working in two modes – we can call them “supercharged autocomplete” and, more recently, “hands-off agentic coding”.

“Supercharged autocomplete” (generally via Copilot, Cursor)

  • What concerns me here is an observed decrease in methodical thinking and working, possibly due to increased distraction/noise while coding.
  • While collaborating with colleagues operating in this mode (remotely, watching their screenshare) I observe more “throwing spaghetti at the wall” rather than building understanding down to the details at the syntactic level.
  • In my own learning process over the years, I’ve gradually (too slowly) developed an intuition that working methodically is essential, to continuously “lock in” progress on some problem and avoid “circling back on your own path.” I feel my colleagues have been getting lost more often while trying to figure out tricky bugs and problems lately.

“Hands-off agentic coding” (Cursor, Claude Code)

  • The tradeoff here seems obvious ~ you’re no longer working directly with the code. My peers have told me this lets you operate on a higher level, that you still need to understand the code, and that you still apply rigorous code review. I’ll take their word for how effective this is at scale (over weeks to months to years working with a codebase), but my bigger concern is that the process of code review has always seemed to me more like a “hole poking” exercise than a “deep understanding” one. Speculatively, I don’t believe I can learn and grow in my craft through code review alone. A recent Mastodon thread that’s been circulating has some interesting parallels, providing a “red/blue” perspective on LLMs and the code review process.
  • My experience has been that I learn best when tinkering and experimenting, building up a system from the smallest of parts and interacting directly with it through as-fast-as-possible feedback loops along the way. Give me a REPL on an airplane and I’m happy as a clam trying to figure something out with limited resources; give me a set of docs to read through and I’m miserable (though good docs are essential ~ when paired with hands-on activity). Similar to writing out something by hand solidifying it in your memory, working directly with code in live operation seems to help me internalize it and remember how it all works naturally. These low-level experiences feel important.
  • On a tangential note, in the web development space I’m not at all happy about the movement toward the “path of least resistance” for building UIs with LLMs ~ they seem best at working in a Tailwind-centric style, which tends to produce an HTML “soup” whose structure is very difficult to make sense of through browser inspection, due to the lack of any meaningful terms associated with DOM elements (see the sketch after this list). Part of the original promise and gift of web technologies was that you could look under the hood and figure out how everything worked ~ this was a boon when my initial interest was sparking and I was starting to learn. This aspect of the web was eroding prior to LLMs, but I think they’ll act as a serious accelerant; I’ve already started to see arguments for coding styles, practices, and technologies that cater to LLM involvement as a priority over human learning and understanding.
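To illustrate that last point, here’s a quick sketch of the contrast as it appears in the browser’s element inspector ~ the component and class names are hypothetical, and it assumes phoenix_live_view for the ~H sigil:

```elixir
defmodule DemoWeb.Cards do
  use Phoenix.Component

  # Utility-first style: the styling is explicit, but the DOM is anonymous.
  # Nothing in the inspector tells you this element is a "cocktail card."
  def card_utility(assigns) do
    ~H"""
    <div class="flex items-center gap-2 rounded-lg bg-zinc-100 px-4 py-2 shadow-sm">
      <span class="text-sm font-medium text-zinc-700"><%= @name %></span>
    </div>
    """
  end

  # Semantic naming: the markup itself says what the element is, at the cost
  # of maintaining a separate stylesheet for the visual details.
  def card_semantic(assigns) do
    ~H"""
    <div class="cocktail-card">
      <span class="cocktail-card-name"><%= @name %></span>
    </div>
    """
  end
end
```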
2 Likes

As someone who’s written a million lines of OSS code, I have some perspective here. Not all OSS is the same. My projects are made available under the MIT license. I have given my permission to use them for whatever purpose you or anyone else likes. Intentionally.

But as for the ethics here, the things that were wrong and illegal are still wrong and illegal and we should pursue companies to that end. I’m more than happy to see investigations and lawsuits for using copyrighted material, and I hope that any found infringements are punished substantially. The individuals, and the company.

For this one, honestly, it’s just hard to say; I haven’t seen convincing evidence one way or the other on how it will impact things. There is compelling evidence in both directions on whether actually using an LLM to learn helps or not. But “LLMs existing while you learn” is not something I’ve seen measured. It could perhaps have a negative effect by encouraging bad habits in beginners, but beginners have always had bad habits, and reality is what trains those out of them (i.e., you’ll get fired or reprimanded for bad work or slop, and thus have to invest time learning things). Very up in the air for me though.

3 Likes

First issue: our context

What you write makes a lot of sense. I am so very happy I’m not working at a company right now — where last year it was Scrum as usual, and Jira for better or for worse. I’ve wondered about people in your situation and how chaotic it must be for you. In my experience, 80% of software development shops run on denial and favoritism. I’d bet that the introduction of AI assistants brings the weaknesses of many organizations to the fore.

In contrast, I’m working for myself. Not contracting at the moment but growing my startup that makes the law easier to read. I’m also working on a content intelligence service for large content publishers. These pay my mortgage but not much more. I’m focussed on growing my income to replace a full-time salary. You can probably see why, with that goal, AI is a blessing. I think I’m a great coder, but I have weaknesses in web design, marketing, and business planning. AI fills in the gaps for me. It’s also great as a pair programmer because it’s better than a literal rubber duck, and tbh most of the colleagues I’ve had in the past.

But for sure, AI is a really mixed bag and that’s why I’m so happy not to be on the job somewhere debating and coping with new processes. AI coding assistants are evolving so quickly, I’m constantly re-thinking my working style.

Second issue: coding vs. managing

Yep, I bet this is how people feel when thinking about “going into management.” That’s the role we take on when we work with hands-off agentic coding.

Now for me, I’m fine with the tradeoff because my goal isn’t to build an app or learn to code better, it’s to build a business. I’ve always wanted to learn to delegate better, and be a good manager. I’ve begun hiring people (contract) and so I’d already been working hard on writing good engineering documents. All of this fits naturally into working with AI.

For example, I know a gal who’s graduating in InfoSec and I’ve begun hiring her for security reviews. I’m planning on a new project where I (and ChatGPT) teach her enough coding to be able to implement her recommendations.

1 Like

@evnp I was first thinking much more broadly about AI and wondering, “How can you not learn from it?” Here are a few examples from my daily life:

…and using the ChatGPT app on my iPhone while shopping, it can read the crinkled nutrition labels that I photograph for it…

1 Like

IMO the best way to address AI-anxiety is to learn the tools really well, and to push for open-source solutions where possible (models, hardware, prompts, agents, etc.)

AI tools for Elixir have made great progress in recent months! Tidewave, UsageRules, AshAI, Hermes, etc. Everyone should learn these, and should run local models with Ollama and Nx. OpenCode.AI is a great model-agnostic coding TUI…
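If you want to try the local-model route, here’s a minimal sketch of prompting a local Ollama model from Elixir using Req (it assumes Ollama is serving on its default port and that you’ve already pulled a model; the model name here is just an example):

```elixir
# Minimal sketch: prompt a locally running Ollama model from Elixir.
# Assumes `ollama serve` is listening on localhost:11434 and that you've
# run e.g. `ollama pull llama3` beforehand.
Mix.install([{:req, "~> 0.5"}])

response =
  Req.post!("http://localhost:11434/api/generate",
    json: %{
      model: "llama3",
      prompt: "Explain the difference between GenServer.call/3 and GenServer.cast/2.",
      stream: false
    },
    # local generation can be slow; allow a generous timeout
    receive_timeout: 120_000
  )

IO.puts(response.body["response"])
```

Nothing leaves your machine, which is a big part of the appeal.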

Way back in the olden days, computing was timeshare controlled by corporations. Then came the PC and the open source we love. With big AI models, we’re back to timeshare. But there are already great open LLMs, and the cost/complexity of local GPUs will come down. I bet that local autonomy won’t die.

5 Likes

Totally agree. If we replaced AI in this debate with a physical, intelligent robot that went to college and studied lots of code, including grokking a lot of open source projects, nobody would complain, just like nobody complains about humans doing that (in fact, many OSS authors would be delighted to have their source code set as study homework). If it cut and pasted a ton of code and claimed it as its own, then yes, that’s bad. But that’s not what is happening.

1 Like

Yeah. I’m mildly interested in hearing someone steelman the other side:

“We didn’t give our permission, although we permissively licensed the work, because …?”

1 Like

@AndyL That is a fascinating perspective, eye-opening to me. I think being immersed in the particular hype-environment shaped by my peers in a professional setting has been limiting ~ the focus so far has been mostly on the “best of the best” corporate offerings for code generation, rather than on what not-for-profit, non-pay-to-play projects have to offer. This viewpoint (from another thread, linked above by @AstonJ) is persuasive and dystopic to me:

and a few paragraphs later

From this perspective, engaging with these technologies feels essential, despite any ethical complexities or concerns. Idealistically sticking to the status quo and attempting to practice the craft as if AI didn’t exist starts to feel like a “head in the sand” approach. Thank you for the project recommendations ~ they give me a great starting point for exploration.

@dogweather Thanks for your thoughts and experiences on all this. Public Law looks like an incredible project! It’s a perfect example of why these tools shouldn’t be ignored ~ if they lead to the creation of projects like this, which (I’m speculating, based on what you’ve said) might not exist otherwise, ignoring them would feel basically contrary to the “greater good.” You’ve also helped me understand why the recently evolving landscape is empowering to individual builders, something that is extremely appealing. As much as I may be concerned for the future of my profession, if there’s a silver lining in independently-developed projects becoming more feasible, that’s a strong balancing factor (and a strong reason to engage with this genre of tools now rather than later). On your question about learning from AI in everyday life: your examples are eye-opening, and perhaps I’m just a bit of a luddite. I try (and have constant trouble) detaching from laptop/phone when I’m not actively working (professionally or creatively). The ability to scan nutrition data is particularly compelling though.

@zachdaniel Thank you for your perspective! And also for Ash ~ it’s been on my radar, a fascinating project which I’ll definitely be taking for a spin one of these days (though probably alongside Phoenix, and establishing some strong foundations there first has seemed important). Agreed this all feels up in the air… part of the disorienting effect of the current landscape. I’m a bit more worried about beginners feeling pressured to lean into AI tools short-term (pressure I’m already starting to feel in various ways), which ends up forming the bad habits they’ll find hard or impossible to shake long-term. The reality of what you’re being told to do right now feels more immediate than the future repercussions, which are hard to quantify or maybe even see at all. I can see why it may be in the corporate interest to push for increased AI usage, but I’m still unconvinced it’s in the interest of us individuals looking for personal growth as well.

Hey, I’m glad to spread some of the positive aspects, and concrete amazing possibilities I’ve discovered.

I think that abstractly, those use cases — learning about quantum physics and good nutrition — are just glorified web searches. (And implementation-wise, that’s precisely what they are.) I now believe that crawler-based web search is a stop-gap, a bad way to find what you’re looking for:

You need to learn something: say, how to maintain good heart health alongside a protein-rich diet that avoids cancer-causing foods. So what do we do? We type some hopeful version of that thought into Google and begin scrolling through links to web pages, hoping that somebody with the proper credentials, authority, accuracy, and lack of bias has memorialized in writing the exact thing we’re seeking to learn.

It can be super-tedious, as we all know. So I see this particular use case for AI as being closer to semantic search, which is what we’ve really been trying to use Google/Bing for all along. I believe that nobody searches for a web page. We’re really searching for information — knowledge. And this AI use case gets us closer, so it’s a no-brainer that people would use it.

1 Like

@evnp I guess I have a lot to say on this subject. :slight_smile: Maybe because I’ve used earlier AI technology in several projects: logic programming (Prolog), rule-based (Clips), etc.

Another strong belief I have that separates what LLMs do well vs. not: convergent vs. divergent thinking. I learned about this dichotomy in law school. Part of the dystopian legal thinking training is realizing that convergent thinking is what it’s all about. Stop being creative. E.g.:

The crime of murder has four elements. Do we have enough evidence of each to prosecute the suspect? Don’t get distracted by “motive” - that’s not an element!

Versus divergent thinking, which is what artists, creatives, and programmers do.

I believe that LLMs are particularly good at divergent thinking (“Write me a sonnet about emacs”) vs. convergent thinking (“What is the API for Elixir 1.8’s String?”). I.e., an open-ended, infinite solution space vs. a small, finite one.

So the trick for us developers, if we want to use LLMs well, is to tap into their strength, divergent thinking, but within our world, which is a somewhat scoped solution space.

I can demonstrate with another example where the LLM was a fantastic help. If you’re into languages, culture, and linguistics, this will be interesting:

Creating a brand name taking into account: Mandarin, Japanese, and romanizations.

Me, working with ChatGPT on an app for Asian markets:

Japanese relies on traditional Chinese characters. There must be some of these for which the original pronunciation is the same. Are there words that are in both Mandarin & Japanese that could be played on?

ChatGPT 4o:

Yes — absolutely. Mandarin Chinese and Japanese share a huge corpus of characters (kanji / hanzi) with common meanings and often similar pronunciations. These “cognate compounds” are a goldmine for cross-cultural branding because they evoke shared meaning across languages while sounding familiar to native speakers of both.


:input_latin_letters: How This Works

  • Shared characters (漢字 / 汉字): Japanese Kanji were imported from Chinese, and many retain similar meanings.
  • On’yomi readings in Japanese (音読み) are derived from Chinese pronunciations, particularly Middle Chinese.
  • Thus, some two-character compounds (熟語) are:
    • Semantically identical
    • Pronounced similarly
    • Intelligible to educated users in both regions

:globe_with_meridians: Cross-Readable Words with Potential

| Hanzi/Kanji | Meaning | Mandarin (Pinyin) | Japanese (On’yomi) | Comments |
|---|---|---|---|---|
| 開発 | Development | kāi fā | kaihatsu | Used in software/dev context in both |
| 技術 | Technology / Skill | jì shù | gijutsu | High-tech resonance |
| 情報 | Information | xìnxī (Mandarin uses 信息 more now) | jōhō | Still mutual |
| 未来 | Future | wèi lái | mirai | Broad, forward-looking brand tone |
| 学習 | Study / Learning | xué xí | gakushū | Strong educational branding |
| 人工 | Artificial (as in AI) | rén gōng | jinkō | Shared stem of “AI” (人工知能) |
| 知識 | Knowledge | zhī shí | chishiki | Classic intellectual appeal |
| 初心 | Beginner’s mind | chū xīn | shoshin | Zen/dev crossover, poetic |
| 快楽 | Delight / Pleasure | kuài lè | kairaku | More playful branding; could be edgy depending on context |
| 発見 | Discovery | fā xiàn | hakken | Suitable for “tutorials,” “new tricks,” or code exploration |
| 工房 | Workshop | gōng fáng | kōbō | For a cozy, maker-oriented tone |

:input_latin_lowercase: Potential Brandable Names

Here are a few blends or rewordings:

:test_tube:

KaiGaku.org

From 開発 (kai) + 学 (gaku), “Develop and Learn.” Easy to say for Chinese and Japanese speakers.

:brain:

ZhiGaku.org

知 (knowledge) + 学 (learning). Classical, high-trust tone.

:rocket:

WeiGaku.org

未来 (wei/mirai) + 学 (learning) — “Future Learning.”

:red_paper_lantern:

ShuShin.org

初心 (“Beginner’s Heart”). Evokes Zen and eternal studenthood.

:wrench:

GongLab.org

工房 or 工 (gong) + Lab — simple, modern, recognizably dev.


:warning: Important Notes

  • These names feel exotic but elegant to English speakers.
  • For Japanese users, they often sound “classic” or even poetic.
  • For Chinese users, they may sound a bit more formal or academic — but trustworthy.

If any of these speak to you, I can help generate additional variations, check domain availability, or even write a “meaning” paragraph to include on your future site.

Want to riff on these more? Or look into Latin+Sino blends?


Me:

Yes, I guess the puzzle is made more difficult because the romanizations should be homonyms.

ChatGPT:

Exactly — and that’s the real trick here: finding shared Chinese-Japanese words where the romanizations are phonetically similar enough to be recognized by both audiences and still readable to English speakers.

Let’s dissect this constraint and then give you some viable options.


:puzzle_piece: Constraints for a “Perfect” Cross-Cultural Brand Name

We’re aiming for:

  1. Shared Kanji/Hanzi – common meaning and concept.
  2. Similar Romanized pronunciation – e.g., kaihatsu (JP) ≈ kaifa (CN) is close but not perfect.
  3. Memorable and spellable to English speakers.
  4. No unintended meanings in either language.

:bullseye: Candidate Concepts With Pronunciation Closeness

| Concept (Characters) | JP Reading | CN Reading | Romanization Similarity | Notes |
|---|---|---|---|---|
| 開発 | kaihatsu | kāi fā | Medium | “Kai” is shared; second syllable differs |
| 技術 | gijutsu | jì shù | Low | Sounds are quite different |
| 学習 | gakushū | xué xí | Low | Visually the same, phonetics diverge |
| 未来 | mirai | wèi lái | Low-Med | -rai and -lai are close |
| 知識 | chishiki | zhī shì | Medium | “Chi”/“Zhi” closeness |
| 初心 | shoshin | chū xīn | Medium | Similar cadence; “sho”/“chu” close in tone only |
| 工房 | kōbō | gōng fáng | Medium | Both mean “workshop”; not a phonetic match |
| 発見 | hakken | fā xiàn | Low | Semantically excellent; phonetically far |
| 人工 | jinkō | rén gōng | Low-Med | “Jin”/“Ren” mismatch, but close syllabic structure |

:test_tube: Best Phonetic Compromises for Branding

Let’s go full-brand-mode and generate romanized names that balance:

  • Clarity to English readers
  • Echo of Chinese/Japanese phonetics
  • Available as .org or .io domains

:seedling:

KaiGaku.org

  • 開発 + 学 (develop + learn)
  • Pronunciation: JP: kai + gaku | CN: kāi + xué
  • Appeal: Sounds Japanese, “Kai” is Chinese and universal; feels positive and learn-y.

:brain:

Shoshin.org

  • 初心 (beginner’s mind)
  • Pronunciation: JP: shoshin | CN: chūxīn
  • Appeal: Very Zen/dev crossover. Already recognizable in mindfulness circles.

:wrench:

Kobo.org

  • 工房 (workshop)
  • Pronunciation: JP: kōbō | CN: gōngfáng
  • Appeal: Elegant, short. Good for a site that feels handcrafted.

:magnifying_glass_tilted_left:

MiraiLab.org

  • 未来 (future) + lab
  • JP: mirai | CN: wèilái
  • Appeal: Globally futuristic, lightly Japanese. Strong potential if you lean into developer education.

:light_bulb:

ZhiShin.org

  • 知心 or 知新 (know the mind / learn new)
  • JP: chishin | CN: zhīxīn
  • Appeal: Sounds personal and intellectual. Easy to brand.

:gear: Final Thoughts

While you won’t find perfect phonetic unity, you can land in the sweet spot: visually familiar kanji + romanized spelling that feels right in Japan, China, and the West alike.

Would you like me to:

  • Check availability of kaigaku.org, shoshin.org, zhishin.org, etc.?
  • Generate more brand ideas using this approach?
  • Build a one-line explainer to add cultural depth on the site (e.g. “KaiGaku: 開発と学習 — Developing and Learning”)?

Happy to go as deep as you like.

1 Like

As the author of those quotes, I mostly agree – but not quite.

LLMs peak quite quickly if you are a programmer who knows his way around. I actually dialed back my LLM usage in the last 2-3 weeks because I felt myself more and more tempted to just reach for them and not think. I once found myself in a 4-hour session during which I did only the most surface-level analysis and just impatiently waited for the LLM’s responses.

That spooked me, not least because I knew I could have done the work myself in less than 4 hours.

So while LLMs are an important part of one’s workflow, that only goes for people who had the analytical and get-it-done skills in the first place. LLMs are actually quite destructive to many other people’s critical thinking abilities – I and many others have witnessed it first-hand.

2 Likes