• 1 Post
  • 153 Comments
Joined 10 months ago
Cake day: September 24th, 2023

  • IMO Julia just had way too many big issues to gain critical mass:

    1. Copied 1-based indexing from MATLAB. Why? We’ve known for decades that it’s the worse option.

    2. For ages it had extremely slow startup times. I think it’s because it compiles everything from scratch, but even cached it would take something like 20 seconds to load the plotting library. You can start MATLAB several times in that time. I believe they improved this fairly recently, but they clearly got the runtime/compile-time balance completely wrong for a research language.

    3. There’s an article somewhere from someone who was really on board with Julia about all the issues that made them leave.

    I still feel like there’s space for a MATLAB replacement… Hopefully someone will give it a better attempt at some point.


  • Anything that helps scientists and engineers move away from MATLAB is welcome.

    The MATLAB language may be pretty bad but IMO that’s not what makes MATLAB good. Rather it’s:

    1. Every signal processing / maths function is available and well documented. I don’t know how well Julia does on this but I know I wouldn’t want to use Python for the kinds of things I used MATLAB for (medical imaging). You don’t have to faff with pip to get a hilbert transform or whatever…

    2. The plotting functionality is top notch. You can easily plot millions of points and it’s fast and responsive. Loads of plotting options. I just haven’t found anything that comes close. Every other option I’ve tried (a lot) only works for small datasets.





  • Pip and venv have been tools that I’ve found to greatly accelerate dev setup and application deployment.

    I’m not saying pip and venv are worse than not using them. They’re obviously mandatory for Python development. I mean that compared to other languages’ tooling they provide a pretty awful experience and you’ll constantly be fighting them. Here are some examples:

    • Pip is super slow. I recently discovered uv which is written in Rust and consequently is about 10x faster (57s to 7s in my case).
    • Pip gives terrible error messages. For example it assumes all version resolution failures are due to requirements conflicts, when actually it can be due to Python version requirements too so you get insane messages like “Requirement foo >= 2.0 conflicts with requirement foo == 2.0”. Yeah really.
    • You can’t install multiple versions of the same dependency, so you end up in dependency resolution hell (depA depends on foo >= 3 but depB depends on foo <= 2).
    • No namespace support for package names so you can’t securely use private PyPI repositories.
    • To make static typing work properly with Pyright and venv and everything you need some insane command like pip install --config-settings editable_mode=compat --editable ./mypackage. How user friendly. Apparently when they changed how editable packages were installed they were warned that it would break all static tooling but did it anyway. Good job guys.
    • When you install an editable package in a venv it dumps a load of stuff in the package directory, which means you can’t do it twice to two different venvs.
    • The fact that you have to use venvs in the first place is a pain. Don’t need that with Deno.
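The single-version constraint in that third bullet can be sketched with a toy resolver (hypothetical names, purely for illustration — pip’s real resolver is far more involved):

```python
# Toy illustration of pip's "one version of foo per environment" rule:
# a single chosen version must satisfy every installed package's constraint.
def resolvable(constraints, available):
    """Return the versions of foo that satisfy every constraint."""
    return [v for v in available if all(ok(v) for ok in constraints)]

dep_a = lambda v: v >= 3  # depA requires foo >= 3
dep_b = lambda v: v <= 2  # depB requires foo <= 2

# No single version can satisfy both, so resolution fails outright.
print(resolvable([dep_a, dep_b], [1, 2, 3, 4]))  # -> []
```

Languages whose package managers allow multiple versions of the same dependency in one tree (e.g. npm, Cargo) simply never hit this particular failure mode.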

    There’s so much more but this is just what I can remember off the top of my head. If you haven’t run into these things just be glad your Python usage is simple enough that you’ve been lucky!

    I’m actually in the process of making such a push where I’m at, for the first time in my career

    Good luck!


  • Python is written in C too, what’s your point?

    The point is that eliminating the GIL mainly benefits pure Python code. NumPy is already multithreaded.

    I think you may have forgotten what we’re talking about.

    the new python version was less than 50 lines and was developed in an afternoon, the c++ version was closing in on 1000 lines over 6 files.

    That’s a bit suss too tbh. Did the C++ version use an existing library like Eigen too or did they implement everything from scratch?


  • The only interpreted language that can compete with compiled for execution speed is Java

    “Interpreted” isn’t especially well defined but it would take a pretty wildly out-there definition to call Java interpreted! Java is JIT compiled or even AoT compiled recently.

    it can be blazingly fast

    It definitely can’t.

    It would still be blown out of the water by similarly optimized compiled code

    Well, yes. So not blazingly fast then.

    I mean it can be blazingly fast compared to computers from the 90s, or like humans… But “blazingly fast” generally means in the context of what is possible.

    Port component to compiled language

    My extensive experience is that this step rarely happens because by the time it makes sense to do this you have 100k lines of Python and performance is juuuust about tolerable and we can’t wait 3 months for you to rewrite it we need those new features now now now!

    My experience has also shown that writing Python is rarely a faster way to develop even prototypes, especially when you consider all the time you’ll waste on pip and setuptools and venv…





  • Meanwhile current AI is pretty much useless for any purpose where you actually need to rely on a decent chance to get quality results without human review.

    Sure but there are tons of applications where you can tolerate lower than human levels of performance.

    The amount of time ChatGPT has saved me programming is crazy, even though it struggles with more complex or niche tasks.

    Here’s what I used it for most recently:

    Write an HTML page that consists of a tree of <details> elements with interspersed text. These are log files with expandable sections. The sections can be nested.

    The difficult part is I want the text content that is stored in the HTML file to be compressed with zlib and base64 encoded. It should be decompressed and inserted into the DOM once when each DOM node first becomes visible.

    Be terse. Write high quality code with jsdoc type annotations.

    It wrote a couple of hundred lines of code that were not perfect but took 5 minutes to fix. Probably saved me an hour writing it from scratch (I’m not a web dev so I’d have to look things up).
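The compression side of that prompt is straightforward, for anyone curious what the embedded sections involve (a Python sketch of the idea, not ChatGPT’s actual output):

```python
import base64
import zlib

def encode_section(text: str) -> str:
    """Compress a log section with zlib, then base64-encode it for embedding in HTML."""
    return base64.b64encode(zlib.compress(text.encode("utf-8"))).decode("ascii")

def decode_section(blob: str) -> str:
    """Reverse of encode_section: base64-decode, then zlib-decompress."""
    return zlib.decompress(base64.b64decode(blob)).decode("utf-8")

section = "ERROR: something failed\n" * 100  # repetitive log text compresses well
blob = encode_section(section)
print(len(section), "->", len(blob))  # encoded blob is much smaller than the raw text
assert decode_section(blob) == section
```

The browser side would do the same decode lazily, only when a `<details>` node first becomes visible.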


  • Modern AI (LLMs etc) is definitely a revolution. Anyone that has tried ChatGPT can tell that, just like the only people saying the iPhone was a fad were the ones that hadn’t used it.

    The thing that is hyped around AI is companies just trying to shove it into everything, and say stuff uses AI when it is totally inappropriate. That doesn’t mean AI itself is nonsense though. The same thing happened with the iPhone (everything had an app even if it made no sense).


  • Sooo much inane naysaying in that Rust for Filesystems article. I’m glad there are people with the stamina to push through it.

    Part of the problem, Ted Ts’o said, is that there is an effort to get “everyone to switch over to the religion” of Rust

    I would say a bigger problem is that there are people that think Rust is some kind of religion with acolytes trying to convert people. Is it really that hard to distinguish genuine revolutions (iPhone, Rust, AI, reusable rockets, etc.) from hyped nonsense (Blockchain/web3, Metaverse, etc.)?

    These things are very obvious IMO, especially if you actually try them!




  • Do you actually have any specific, tangible issue with submodules?

    Yeah sure. These are few that I can remember off the top of my head. There have been more:

    • Submodules don’t work reliably with worktrees. I can’t remember what kind of bugs you run into but you will run into bugs if you mix them up. The official docs even warn you not to.

    • When you switch branches or pull you pretty much always have to git submodule update --init --recursive. Wouldn’t it be great if git could do that for you? Turns out it can, via an option called submodule.recurse. However… if you use this you will run into a very bad bug that will seriously break your .git directory.

    • If you convert a submodule to a directory or vice versa and then switch between them git will get very confused and you’ll have to do some rm -rfing.

    Even in the cases you’re clearly and grossly misusing them

    Oh right, so the bugs in Git are my fault. Ok whatever idiot.


  • Neat, but I’d really like it to just handle memory properly without me having to tweak swap and OOM settings at all. Windows and Mac can do it. Why can’t Linux? I have 32GB of RAM and some more zswap and it still regularly runs out of RAM and hard resets. Meanwhile my 16GB Windows machine from 2012 literally never has problems.

    I wonder why there’s such a big difference. I guess Windows doesn’t have over-commit, which probably helps apps like browsers know when to kick tabs out of memory (the biggest offender on Linux for me is having lots of tabs open in Firefox). And Windows doesn’t ignore the existence of GUIs like Linux does, so maybe it makes better decisions about which processes to move to swap… but it feels like there must be something more?