• 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: August 2nd, 2023


  • I keep hearing good things, but I have not yet seen any meaningful results for the stuff I would use such a tool for.

    I’ve been working on network function optimization at hundreds of gigabits per second for the past couple of years. Even with MTU-sized packets you only get approximately 200 ns of processing time per packet (assuming no batching). Optimizations generally involve manual prefetching and using/abusing NIC offload features to minimize atomic instructions (this also runs on ARM, where GCC compiles an atomic fetch-and-add into a function call around a load-linked/store-conditional loop that takes roughly 8 times as long as a regular memory write). Current AI-assisted agents cannot generate code efficient enough to run at line rate. There are no textbooks or blog posts that explain how these things work in detail, so there are no resources for them to be trained on.
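
    To give a concrete flavor of what “minimize atomic instructions” means in practice, here is a rough sketch (a generic illustration, not my actual hot path; the names, core count, and padding are made up): keep one counter per core and only aggregate when stats are read, so the fast path never issues an atomic read-modify-write.

      /* Per-core counters instead of one shared atomic counter (sketch). */
      #include <stdint.h>

      #define MAX_CORES 64
      #define CACHELINE 64

      /* One counter per core, padded to a cache line to avoid false sharing. */
      struct percore_counter {
          uint64_t packets;
          char pad[CACHELINE - sizeof(uint64_t)];
      } __attribute__((aligned(CACHELINE)));

      static struct percore_counter rx_count[MAX_CORES];

      /* Hot path: single-writer relaxed increment, no ll/sc loop, no fence. */
      static inline void count_packet(unsigned core_id)
      {
          uint64_t v = __atomic_load_n(&rx_count[core_id].packets, __ATOMIC_RELAXED);
          __atomic_store_n(&rx_count[core_id].packets, v + 1, __ATOMIC_RELAXED);
      }

      /* Cold path (stats readout): sum the per-core values. */
      static uint64_t total_packets(void)
      {
          uint64_t sum = 0;
          for (unsigned i = 0; i < MAX_CORES; i++)
              sum += __atomic_load_n(&rx_count[i].packets, __ATOMIC_RELAXED);
          return sum;
      }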

    You’ll find a similar problem if you try to prompt them to generate good RDMA code. At best you’ll get something that barely works, and the code almost never exploits the latency reduction RDMA provides over traditional transport protocols. The generated code usually looks like how a CS graduate student might imagine RDMA works, but it is usually unusable in practice, either requiring additional PCIe round trips or thrashing main memory.
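
    For reference, this is the kind of primitive I mean (a stripped-down libibverbs sketch with hypothetical buffers and keys, error handling omitted): a one-sided RDMA WRITE never involves the remote CPU, which is exactly the property the generated code tends to throw away with extra round trips.

      /* Post a one-sided RDMA WRITE; the remote side posts nothing. */
      #include <infiniband/verbs.h>
      #include <stdint.h>
      #include <string.h>

      int post_rdma_write(struct ibv_qp *qp, void *local_buf, uint32_t len,
                          uint32_t lkey, uint64_t remote_addr, uint32_t rkey)
      {
          struct ibv_sge sge = {
              .addr   = (uintptr_t)local_buf,   /* registered local buffer */
              .length = len,
              .lkey   = lkey,
          };
          struct ibv_send_wr wr, *bad_wr = NULL;

          memset(&wr, 0, sizeof(wr));
          wr.wr_id               = 1;
          wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write */
          wr.sg_list             = &sge;
          wr.num_sge             = 1;
          wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
          wr.wr.rdma.remote_addr = remote_addr;        /* from the peer's MR */
          wr.wr.rdma.rkey        = rkey;

          return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
      }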

    My guess is that these tools are ridiculously good at stuff they can find examples of online. For anything that has no examples, however, they are woefully underprepared, and you still need a programmer to do the work manually, line by line.


  • As much as I hate the concept, it works. However:

    1. It only works for generalized programming (e.g. write a Python script that parses CSV files). For specialized fields it would NOT work (e.g. write a DPDK program that identifies RoCEv2 packets and rewrites the IP address; see the sketch after this list).

    2. It requires the human supervising the AI agent to know how to write the expected code themselves, so they can prompt the agent to use specific techniques (e.g. use Python’s csv library instead of string.split). This is not a problem now, since even programmers fresh out of college generally know what they are doing.
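
    To make point 1 concrete, here is roughly what the specialized case looks like (a sketch only, assuming a single mbuf segment, untagged Ethernet, and IPv4 without options; the function name and new address are made up): identify RoCEv2 packets by their UDP destination port (4791) and rewrite the destination IP.

      /* DPDK sketch: match RoCEv2 (UDP dst port 4791) and rewrite dst IP. */
      #include <netinet/in.h>
      #include <rte_byteorder.h>
      #include <rte_ether.h>
      #include <rte_ip.h>
      #include <rte_mbuf.h>
      #include <rte_udp.h>

      #define ROCEV2_UDP_PORT 4791

      static void rewrite_rocev2_dst(struct rte_mbuf *m, rte_be32_t new_dst)
      {
          struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
          if (eth->ether_type != rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4))
              return;

          struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(
              m, struct rte_ipv4_hdr *, sizeof(*eth));
          if (ip->next_proto_id != IPPROTO_UDP)
              return;

          struct rte_udp_hdr *udp = rte_pktmbuf_mtod_offset(
              m, struct rte_udp_hdr *, sizeof(*eth) + sizeof(*ip));
          if (udp->dst_port != rte_cpu_to_be_16(ROCEV2_UDP_PORT))
              return;                              /* not RoCEv2 */

          ip->dst_addr = new_dst;                  /* rewrite destination IP */
          ip->hdr_checksum = 0;                    /* recompute v4 header checksum */
          ip->hdr_checksum = rte_ipv4_cksum(ip);   /* (or use NIC TX offload) */
      }

    Even this toy version glosses over details (the RoCEv2 ICRC likely needs patching as well), which is exactly why point 1 holds.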

    If companies try to use this to avoid hiring/training skilled programmers, they will have a very bad time in the future when the skilled talent pool runs dry and nobody is left who can tell correctly written code from incorrect code.








  • Some people play games to turn their brains off. Other people play them to solve a different type of problem than they do at work. I personally love optimizing, automating, and min-maxing numbers while doing the least amount of work possible. It’s relatively low-complexity (compared to the bs I put up with daily), low-stakes, and much easier to show someone else.

    Also, shout-out to CDDA and FFT for having some of the worst learning curves out there, right up there with DF. Paradox games get an honorable mention for their wiki.



  • I don’t think either of us is the target audience here. I can see a “cheaper” (questionable) Pro laptop being useful for students going into college on a limited budget. An undergrad CS/graphic design degree shouldn’t tax an 8 GB machine too much, assuming students shut down everything else when doing their once-a-semester major rendering/compiling/model-training job. If people just want MacBook Pro software with more ports, a “cheaper” machine is better than none. Personally, I would still get a used/refurbished machine though.

    That being said, my current laptop workload tends to be Emacs, qpdfview, Firefox, and tmux on EL9. For everything else, I usually just spin up a VM and ssh/xrdp into it. As for Slack, Teams, Jabber, etc., I’m happy to report I’ve been out of industry/IT for 1+ years and don’t plan on going back anytime soon. For all I care, Apple can call their models the unicorn edition; as long as it sells, it’s not stupid.




  • 8 GB of RAM and 256 GB of storage are perfectly fine for a pro-ish machine in 2023. What’s not fine is the price point they are offering it at (but if idiots still buy it, that’s on them and not Apple). I’ve been using an 8 GB RAM / 256 GB storage ThinkPad for lecturing, small code demos, and light video editing (e.g. Zoom recordings) this past year, and it works perfectly fine. But as soon as I have to run my own research code, back to the 2022 Xeon I go.

    Is it Apple’s fault people treat browser tabs as a bookmarking mechanism? No. Is it unethical for Apple to say that their 8 GB model fits this weirdly common use case? Definitely.





  • I might switch to it once Bitwarden support comes out.

    Worst case, I lose my Google account, which I only use for Android (no sync, no mail, no purchases).

    Best case, Google no longer defaults to mobile 2FA and finally accepts that I want to use TOTP every time.

    Also, how would the biometrics requirement work if all I’m doing is storing the whole thing in a Bitwarden vault?