Which new computer language for fun?

fitten

Ars Legatus Legionis
52,251
Subscriptor++
So... I was thinking about learning a new computer language, purely for fun. It doesn't look like I can make a poll in this forum, so I'll just have to deal with it. I have programmed in C, C++, C#, various assembly, Java (not that much, though), Python, JavaScript/TypeScript, and some unmentionables like Perl and Fortran, and even, far back in the past, LISP, Prolog, Ada, and some others. I'm thinking of picking one that might actually be useful outside of just playing around, so my first thought was maybe Rust, or maybe even Zig? But I'm open to others... Go, maybe? The only real requirement is that it be available on Linux (I'm using Ubuntu). My plan is to just play around doing various tutorials I can find on the 'net, so a language with a good set of (free) tutorials and maybe a good community would be a plus.
 

ImpossiblyStupid

Smack-Fu Master, in training
81
Subscriptor
The question I always ask when considering a new language is what it actually offers me beyond some syntax variation. What kinds of things don't you do in the languages you already know, or what do those languages simply not do well for you?

If you haven't done significant Javascript in a while, you might want to revisit it and see just how much it has changed.

If you like Perl, maybe give Ruby a look. Or possibly Lua.

Maybe pick a problem that interests you and head on over to Rosetta Code to see which languages give you solutions that you find particularly elegant.
 
  • Like
Reactions: fitten

Apteris

Ars Tribunus Angusticlavius
8,938
Subscriptor
purely for fun

[..]

I'm thinking of picking one that might actually be useful outside of just playing around
Pick a lane. :p

I don't know, only you can say what is fun for you. I found LISP incredibly appealing once I understood how it worked, way back in university. Same goes for Haskell (except I still don't understand it, but it has the same feel) and F# -- people who use them swear by them.

There's a code-golfing language called Uiua that looks pretty fun too; it puts its symbol alphabet at the very top of its homepage for people to copy and paste, because God knows you won't find those symbols on your keyboard.

Or Rust, of course, it's interesting too, and is being used more and more. You won't have trouble finding interesting projects to work on, if you're into that sort of thing.
 
  • Like
Reactions: mir-teiwaz

koala

Ars Tribunus Angusticlavius
7,579
My first recommendation would be Prolog, but you said you already did that. I'll note that when I did Prolog at university, I didn't learn about DCGs. I played with them recently and it was an eye-opener (they turn Prolog into a very nice parser). So I would recommend Prolog only if you didn't do much with it, or if you didn't do DCGs and you're interested in parsing.

My next recommendation would be an ML, because you list none. F#, Haskell, SML, OCaml... they all offer interesting things (.NET integration with F#, the most purism and the most bizarre libraries with Haskell... and the interesting OCaml ecosystem).

I think Kotlin is nice, and a lot of non-Android stuff supports Kotlin (e.g. Spring Boot has Kotlin support), but I don't see it as that interesting.

Rust is awesome, and it nearly counts as an ML. But it really depends on how much you want to stray into the C/C++ side of the world (you mentioned you've been there, done that).

Other strange recommendations: Janet (a Lisp with built-in PEG support), Pascal (I'm intrigued by the current status of the RAD ecosystem), Swift/Dart (both very interesting), Erlang (the Erlang paper is awesome), Smalltalk (I interviewed with a company that was using Pharo, in this day and age)...
 

fitten

Ars Legatus Legionis
52,251
Subscriptor++
The question I always ask when considering a new language is what it actually offers me beyond some syntax variation. What kinds of things don't you do in the languages you already know, or what do those languages simply not do well for you?

Yeah, this is probably my biggest hurdle with this. If I had to learn something for work, that'd be different, but I was thinking last night that mostly what I'd be doing is just learning new syntax.
 

Ardax

Ars Legatus Legionis
19,076
Subscriptor
Yeah, this is probably my biggest hurdle with this. If I had to learn something for work, that'd be different, but I was thinking last night that mostly what I'd be doing is just learning new syntax.
That's exactly what you'd be doing at this point. But too many employers seem to only want to hire people who already have experience in a particular stack, and Rust and Go seem to be ascendant.

If you've got a strong network that can get you around the initial HR filters and onto the top of the pile to get an initial call or even an interview with the hiring manager, then it may not matter.
 

koala

Ars Tribunus Angusticlavius
7,579
I suggested any ML because if you've never done sum types, etc., that's more than "syntax". Just like Prolog/DCGs can expand your mind beyond syntax (if you are not already familiar with them).
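
If you want a rough taste of sum types in a language you already know, here's a sketch in Python (purely illustrative; in a real ML, exhaustiveness is checked at compile time, which is a big part of the point):

Code:
from dataclasses import dataclass

# A sum type: a Shape is EITHER a Circle OR a Rect, nothing else.
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    width: float
    height: float

Shape = Circle | Rect

def area(shape: Shape) -> float:
    # In an ML, forgetting a case here is a compile-time error.
    match shape:
        case Circle(radius=r):
            return 3.14159 * r * r
        case Rect(width=w, height=h):
            return w * h

print(area(Circle(2.0)))  # 12.56636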

Even diving deep into, of all things, SQL can be very revealing.

But certainly, doing Kotlin when you're already familiar with Java, for example, won't have the same effect, I think.
 

Apteris

Ars Tribunus Angusticlavius
8,938
Subscriptor
Yeah, this is probably my biggest hurdle with this. If I had to learn something for work, that'd be different, but I was thinking last night that mostly what I'd be doing is just learning new syntax.
You could learn an esoteric programming language, if that sort of thing strikes your fancy.

Velato, for example, is a language which uses MIDI files as source code.

The pattern of notes determines commands. Velato offers an unusual challenge to programmer-musicians: to compose a musical piece that, in addition to expressing their aims musically, fills the constraints necessary to compile to a working Velato program. Each song has a secret message: the program it determines when compiled as Velato.

Piet, on the other hand, uses blocks of colour as syntax, so that the end product -- the program -- looks like an abstract painting that might have been painted by Piet Mondrian.

I mentioned Uiua above, which certainly looks esoteric in use, at least, even if in actuality it's a "typical" stack-based code-golfing language.

Or you could learn to use the Wolfram Language, which is less a language (though it is that) than an interface to Wolfram's database of facts and ontologies.

---

It's really a question of what you want to learn. Consider the following (thoroughly non-scientific and not exhaustive) pyramid of knowledge:

7. how to write good programs, how to write programs efficiently
6. frameworks, libraries, toolkits
5. programming languages
4. algorithms, data structures
3. operating systems, networking, databases
2. computer hardware
1. electrical engineering
0. classical and quantum physics

Concepts at layer n rely on those at layer n-1, and therefore indirectly on everything below themselves. (Also it's not really a pyramid, it's a graph; programming languages et al. obviously rely on theoretical computer science, and theoretical CS is a branch of mathematics and as such not limited by the constraints of our physical universe. But let's set all of this aside for the time being.)

So, consider which one is most appealing to you. Even "just learning new syntax" is educational and valuable, if the new syntax in question is sufficiently different from that which you already know. But you can also choose something farther afield.
 

MilleniX

Ars Tribunus Angusticlavius
6,767
Subscriptor++
I'd second Rust because it moves the industry where I want to see it go, but that's a poor argument for something new you're doing for fun.

If you want mind-expanding in a different direction than the ones others have mentioned so far, you might try out GPU programming with CUDA, HIP, or SYCL. They're all more or less extensions of C++, but learning how to program an explicit vector computer architecture and get good performance out of it presents some interesting challenges.

For the record, if one were planning on using a GPU for serious work, I'd strongly recommend using a portability library like Kokkos instead of the vendors' own languages.
 

fitten

Ars Legatus Legionis
52,251
Subscriptor++
A few years ago I picked up some books on parallelism and dug into the pthreads library. It changed how I think about code more than just learning a new language ever did. Of course parallelism is built into many newer languages already, but IMHO these tend to restrict when/where the mutexes are.

Yeah. Parallelism is something I'm familiar with. I've done a bit with SQL as well. I still haven't decided what, if anything, to do. I was also thinking a little about something that's very cross-platform (I don't want to do anything in JavaScript/TypeScript... I've had enough of that for a while, and I'll probably be hip-deep in it again before long), but it seems there's not much else besides JS that is. :(
 

ImpossiblyStupid

Smack-Fu Master, in training
81
Subscriptor
Of course parallelism is built into many newer languages already, but IMHO these tend to restrict when/where the mutexes are.
What I've had great fun doing is taking whatever native threading mechanisms exist and incorporating them into higher-level concepts, like message passing, and then mixing those in with things like networking. So, for example, you could implement an iterator that operates with threads and/or on multiple machines, which keeps your code itself clean while also allowing you to scale it along different dimensions simply by switching out the iterator. Pick a language with a flexible enough runtime and the sky is the limit.
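
A minimal sketch of the idea in Python (the names are mine, just for illustration): the calling code is written against a map-shaped abstraction, and you change how it scales by swapping the executor:

Code:
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch_length(url: str) -> int:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return len(resp.read())

urls = ["https://example.com", "https://example.org"]

# Serial version:
serial_results = list(map(fetch_length, urls))

# Threaded version: identical calling code, different "iterator".
with ThreadPoolExecutor(max_workers=8) as pool:
    threaded_results = list(pool.map(fetch_length, urls))

# A multi-machine version keeps this exact shape; only the pool
# (e.g. a cluster-backed executor) changes.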
 

ImpossiblyStupid

Smack-Fu Master, in training
81
Subscriptor
IME that scales very poorly. Attempting to hide RPC mechanisms results in badly behaved and inefficient systems that actually scale badly.
Disagree. What's inefficient to me is a modern 10+ core system that has almost nothing that uses more than 1 or 2 cores, simply because the mechanisms to take advantage of all that horsepower are at too low a level and require you to fundamentally rewrite/debug all your code from scratch. Anything that abstracts independent operations and allows you to decouple from those dependencies is inherently a win. It's fun to bend your brain to think that way by playing around with functional languages, but it can be just as interesting to see how easy it is to do that sort of thing in languages you already know, and experiment with the ways it can influence your coding style.
 

ShuggyCoUk

Ars Tribunus Angusticlavius
9,975
Subscriptor++
Local, sure; remote, no.

Something that causes you to actually restructure a task to create better parallelism, sure. Something that just hides high-latency operations with a non-trivial chance of failure/timeout, no.

To discuss the former you need examples, lest someone assume the restructuring helps a task they're attempting that it fundamentally can't.

TNSTAAFL
 
  • Like
Reactions: MilleniX

koala

Ars Tribunus Angusticlavius
7,579
I know the theory, but in practice I really don't see much API use that doesn't end up being RPC-like.

In the end, most popular APIs give you an API client for your language, which looks like this:

Code:
octokit.rest.issues.list_for_repo(owner: "github", repo: "docs", per_page: 2)

...or you end up writing your own client, which means your API call ends up being a function call in your code. That is, RPC, even if people use REST because RPC is evil...

It is true that:

  • Some popular APIs are less RPC-ish. The first one I checked was the Google Calendar API (the last one I played with), which was less like the GitHub API example above.
  • Many popular APIs expose you to pagination. But what I see all the time is people just looping over all results, as in the sketch after this list (which makes me think: is that really better for anyone than an API with bigger pagination limits?).
  • Service meshes encapsulate some of the more sophisticated logic around network APIs, but it is my impression that service meshes are not that common?
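
To be concrete, the "just loop over all results" pattern I mean usually ends up as a generator like this (a sketch; the endpoint and names are made up):

Code:
# Sketch of the usual "loop over every page" client helper.
import requests

def all_issues(repo: str):
    page = 1
    while True:
        resp = requests.get(
            f"https://api.example.com/repos/{repo}/issues",
            params={"page": page, "per_page": 100},
            timeout=30,
        )
        resp.raise_for_status()
        items = resp.json()
        if not items:
            return
        yield from items  # the caller sees one flat stream, not pages
        page += 1

Callers then just write a for loop over all_issues(...) and the pagination (and all those extra requests) disappears from view.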

I also have not been exposed to "proper" microservice architectures, where there's all that talk about patterns such as circuit breaker, but...

I think in practice API calls do not fail all that often. Even improperly-run services tend to have 99.9% availability, and the use of an API is often "necessary", so if the API breaks, it's fine for you to break too.

And when that is not the case, or there's excessive latency, people end up putting a queue somewhere, which in many cases is still wrapped in a function call, so it's still quite RPC-ish.

Of course, I'm a small-company type of person. I'm pretty sure that higher-scale teams do need to do more, but... the SO 2023 survey says "40% of respondents work for an organization that has less than 100 employees", and that's employees, not programmers; <500 takes it to over 50%...

(The other fun thing that API users need to take into consideration, besides pagination, is another "artificial" limit: API request limits, throttling, etc. Funny, the reasons we can't have nice things...)
 

ImpossiblyStupid

Smack-Fu Master, in training
81
Subscriptor
it’s hiding the remote aspect that’s bad.
You're going to have to explain why you think that. Abstractly, the locality of operations does not matter. Yes, a network failure is yet another layer/complexity that has to be handled at some level, but is it really all that different a failure mode than running into a thread limit, or a process limit, or a RAM limit, or any other sort of local failure of concurrent processing that could ultimately cause one step of the overall operation to fail?

In fact, I would argue that the ability to recover from failure is better for networked operations. If some worker node out there falls down and dies, its job can just be reassigned to a different node. Usually that can be done transparently, as far as the local process is concerned.

Regardless, it's not about trying to get something for free. It's about using all available resources to accomplish a task. Maybe I can install and search a local copy of Wikipedia to find an article, but usually I'm going to let their computers do that. Either way, there's no reason for 99% of my code to care; it should be a library call that looks exactly the same.
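
As a sketch of what I mean (the class names are made up; the remote one calls Wikipedia's real opensearch endpoint):

Code:
import json
import urllib.parse
import urllib.request

class LocalDump:
    """Naive search over a local text file, one article title per line."""
    def __init__(self, path: str) -> None:
        self.path = path
    def search(self, query: str) -> list[str]:
        with open(self.path) as f:
            return [line.strip() for line in f if query.lower() in line.lower()]

class WikipediaAPI:
    """Same interface, but the work happens on Wikipedia's machines."""
    def search(self, query: str) -> list[str]:
        url = ("https://en.wikipedia.org/w/api.php?action=opensearch"
               "&format=json&search=" + urllib.parse.quote(query))
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)[1]  # [query, titles, descriptions, urls]

def find_articles(source, query: str) -> list[str]:
    return source.search(query)  # the caller is indifferent to locality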
 

Apteris

Ars Tribunus Angusticlavius
8,938
Subscriptor
So, for example, you could implement an iterator that operates with threads and/or on multiple machines, which keeps your code itself clean while also allowing you to scale it along different dimensions simply by switching out the iterator.
That sounds like very clever code.

You're going to have to explain why you think that. Abstractly, the locality of operations does not matter. Yes, a network failure is yet another layer/complexity that has to be handled at some level, but is it really all that different a failure mode than running into a thread limit, or a process limit, or a RAM limit, or any other sort of local failure of concurrent processing that could ultimately cause one step of the overall operation to fail?
(Speaking for myself, not Shuggy, of course.)

Abstractions leak, and network calls are quantitatively different enough from local calls that they become qualitatively different. Nothing on the local machine is going to hang for 30-60s before returning a response. You're not going to be surprised by a middlebox malfunctioning on a local machine, as you will be when making a network call. Things like trying to reuse a stale TCP connection -- which throws an error now even though the connection went stale 30s ago -- are much rarer when working locally.

When I program locally, my expectation is that most individual calls will complete within a few dozen milliseconds, at most. When I call external resources, I have to think explicitly about latencies, be they also small (e.g. when calling a well-optimized database) or very large (e.g. REST calls). Putting those two regimes behind an abstraction that purports to treat them the same is... questionable.
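
To make that concrete, a small Python sketch (the URL and the numbers are made up):

Code:
import requests

# Local call: nobody writes a deadline around a dict lookup.
cache = {"user:42": {"name": "Alice"}}
value = cache["user:42"]

# Remote call: the timeout, the failure path, and what "too slow" means
# all have to be spelled out, because a 30-60s hang is a real possibility.
try:
    resp = requests.get("https://api.example.com/users/42",
                        timeout=(3, 10))  # (connect, read) seconds
    resp.raise_for_status()
    user = resp.json()
except (requests.Timeout, requests.ConnectionError):
    user = None  # degrade, queue a retry, alert... an explicit decision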
 

ShuggyCoUk

Ars Tribunus Angusticlavius
9,975
Subscriptor++
You're going to have to explain why you think that.
Apteris nailed it, I'm just covering some specific additional points.
Abstractly, the locality of operations does not matter. Yes, a network failure is yet another layer/complexity that has to be handled at some level, but is it really all that different a failure mode than running into a thread limit, or a process limit, or a RAM limit, or any other sort of local failure of concurrent processing that could ultimately cause one step of the overall operation to fail?
Yes. So, covering only the disparity between a local concurrent operation and a remote one, starting with this:

In fact, I would argue that the ability to recover from failure is better for networked operations. If some worker node out there falls down and dies, its job can just be reassigned to a different node. Usually that can be done transparently, as far as the local process is concerned.
The trick here is what assumptions you are making.
I went through this sort of thing a lot with people new to serious-scale distributed computing (I've helped define, write APIs for, and use/operate calc farms bigger than some "big" tech's).

Assumptions (in no particular order), since you stated you could recover from failure, including the death of a node:
  1. Your jobs are idempotent
  2. The security aspects are tractable (or you simply have no security)
  3. You can predict how long a job takes well enough to determine when it is "not making progress"
  4. The cost of transferring the data needed for your job is less than the cost of just doing it locally
    1. alternative (Spark-like): your compute can go to where your data is
  5. You have decent monitoring of the remote world to know when behaviours of some jobs are impacting others negatively.
Designing your system to have those concepts is really quite hard (I've had such fun showing people how to achieve number 1).
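
For a feel of number 1, the usual shape of the fix is roughly this (a sketch; a plain set stands in for what should be a shared, durable store):

Code:
completed: set[str] = set()  # in real life: a database table, not a set

def do_work(payload: dict) -> None:
    # The work itself must avoid partial side effects, e.g. write to a
    # temporary location and rename at the end.
    print("processing", payload)

def run_job(job_id: str, payload: dict) -> None:
    if job_id in completed:    # a duplicate/reassigned run becomes a no-op
        return
    do_work(payload)
    completed.add(job_id)      # the "commit" point

run_job("job-123", {"n": 1})
run_job("job-123", {"n": 1})  # second delivery: does nothing
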
Hence my statement up thread which I'll repeat:
Something that causes you to actually restructure a task to create better parallelism, sure. Something that just hides high-latency operations with a non-trivial chance of failure/timeout, no.
So someone using Spark, MapReduce, etc. is using a framework which works very hard to push people down those routes. And they can be huge wins (at some cost; I understand Spark is still miserable outside the JVM ecosystem, please do correct me if that's changed), and you really do need to learn how to use such a framework well to make it work.

Think of the multiprocessor systems out there, and think of which ones can, at runtime, handle the failure of one of the CPUs. They are almost all called mainframes; they are highly expensive, have much more limited APIs/programming languages and fairly restricted OSes (yes, you can tenant a *nix on there, but it's from a very small pool), and are often actually slower than their non-mainframe cousins.
But they can handle that runtime failure, so for things that need many nines after the decimal place they are worth the cost.
There are other examples (space tech, medical controllers, etc.), and those are all paying for those last few nines of reliability.

You don't add in those capabilities unless someone wants them and is willing to pay for them.

Compute clusters are so much cheaper than they were, but they still cost a lot of money, and you don't opt into that at small scale unless you want to waste a bunch of time, effort, and money on all the frictions created.

Regardless, it's not about trying to get something for free. It's about using all available resources to accomplish a task.
That's a foolish description of the problem. Anyone suggesting that as-is in business would be laughed out of the room.
You want to use the least amount of resources to accomplish a task in an acceptable amount of time (this is often an optimisation problem with some interesting surfaces, but you get the idea).

Maybe I can install and search a local copy of Wikipedia to find an article, but usually I'm going to let their computers do that. Either way, there's no reason for 99% of my code to care; it should be a library call that looks exactly the same.
So long as the library call makes it clear that it's doing that (I'll cover that under koala's question).
When someone doesn't realise that and issues so many calls that it trips Wikipedia's excessive-use/DoS detection systems and black-holes you, that will be fun. That either simply can't happen locally, or it's an internal problem to solve (and in all but the most messed-up of companies, that's an easier problem to solve than an external one).
 
Last edited:
  • Like
Reactions: MilleniX

ShuggyCoUk

Ars Tribunus Angusticlavius
9,975
Subscriptor++
Code:
octokit.rest.issues.list_for_repo(owner: "github", repo: "docs", per_page: 2)
<=== to me, that is hiding the remote aspect. What do you consider to hide the remote aspect?
I fixed the code tag
Several reasons I don't think it is:
  • The fact that there's rest right there in the namespace is genuinely a significant "I'm not hiding this"
  • it's talking to GitHub, not git.
    • GitHub is remote (you can self-host it, but the means to talk to it are always network APIs)
    • I know many people do conflate GitHub and git, but to most programmers the fact that it's GitHub means it is remote
The clincher, though, is this (which I expected, but they helpfully state it in their docs):
Most of GitHub’s REST API endpoints have matching methods. All endpoint methods are asynchronous, in order to use await in the code examples, we wrap them into an anonymous async function.
The asynchronicity, and the fact that almost all the methods are 1:1 mappings to the underlying REST endpoints.

Then there's:
Some endpoints return a list which has to be paginated in order to retrieve the complete data set.

Learn more about pagination.
and
You can add more functionality with plugins. We recommend the retry and throttling plugins.

Learn more about throttling, automatic retries and building your own Plugins.
This is an API designed to clearly make the remote aspects part of it (AFAICT; I've not personally used it). It seems to me an excellent example of not hiding the remoteness.

It largely hides the fact that it will be HTTP under the hood (baseUrl etc. and the user agent are available), but it doesn't hide the remoteness, and that's the crucial part.
 

ShuggyCoUk

Ars Tribunus Angusticlavius
9,975
Subscriptor++
So it's clear: I love RestEase and similar wrappers. I see no reason to write my own boilerplate.
The results of libraries like that, or of decent code generators and the like, are not IMO hiding the remoteness. They may hide the nitty-gritty of HTTP, the fact that there are associated types that may be trivially mapped to the types of the language you are using, etc.

As a concrete example of doing that wrong: I saw someone blend RestEase and Polly into a 'helper' that tried to do all the retry handling for you without you telling it to. It was using synchronous methods (likely written when there was no async support, but still).

This was miserable because it was a nightmare to diagnose when errors happened, and they took longer to discover/debug. It often led to people putting in double retry wrappers (because they were unaware of the internal retry). It made configuring the auth requirements way harder.

The internal code wasn't crap; it was written in an okay-ish manner. The conceptual idea of that 'join' in a library was just bad.
 

koala

Ars Tribunus Angusticlavius
7,579
OK, my definition of "hiding remote" is different from "the name makes it clear it's a remote call".

I did mention paging in my post, but really I think paging is typically an antipattern somewhere (either in your code, or in the API itself).

I have a similar opinion on error handling/retries, although in principle I prefer non-API-specific abstractions (e.g. in Python, something like https://pypi.org/project/retry/ ) instead of each API client having its own mechanism. However, I'm undecided on this, because I see some drawbacks to this approach and I haven't played enough in this space to have a clear opinion.
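
For illustration, the kind of non-API-specific abstraction I mean, hand-rolled (a sketch; the decorated function is hypothetical):

Code:
# A generic, API-agnostic retry decorator (the PyPI "retry" package
# linked above does roughly this, with more options).
import functools
import time

def retry(times: int = 3, delay: float = 1.0, backoff: float = 2.0):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == times - 1:
                        raise
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator

@retry(times=5, delay=0.5)
def fetch_calendar_events():
    ...  # any API client call; the retry policy lives outside it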
 

Apteris

Ars Tribunus Angusticlavius
8,938
Subscriptor
As a concrete example of doing that wrong: I saw someone blend RestEase and Polly into a 'helper' that tried to do all the retry handling for you without you telling it to
OkHttp -- a popular HTTP library for the JVM -- does that. It retries silently and aggressively, and you learn about it from error messages like "retried more than 20 times and failed" logged by other components.

Ha ha. Ha.
 
  • Like
Reactions: ShuggyCoUk