• 1 Post
  • 20 Comments
Joined 9 months ago
Cake day: January 3rd, 2024

  • Qt is a cross-platform UI development framework; its goal is to look native to the platform it runs on. A 2014 video by a Linux maintainer explains its benefits over GTK. It’s a fun video, and I don’t think the issues have really changed.
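
    For instance, the same few lines get drawn with the host platform’s native widget style on Windows, macOS or a Linux desktop. A minimal sketch using PySide6, the official Python bindings for Qt:

    ```python
    import sys
    from PySide6.QtWidgets import QApplication, QPushButton

    app = QApplication(sys.argv)
    # Qt picks a platform style at startup, so this one button is
    # rendered to match the native look of whatever desktop runs it.
    button = QPushButton("Hello from Qt")
    button.show()
    sys.exit(app.exec())
    ```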

    Most GTK advocates will argue Qt is developed by Trolltech and isn’t GPL licensed, so it could go closed source! This argument ignores that open source projects use the open source releases of Qt, and that if Trolltech did close the source, the last open source release would be maintained (much like GTK).

    Personally I would avoid Flutter on the grounds that it’s a Google-owned framework and Google has the attention span of a toddler.

    Not helping that assessment: Google let go of the Fuchsia team (which Flutter was being developed for) and seems to have let go of a lot of Flutter developers.

    Personally I hate web frontends as local applications. They integrate poorly with the desktop, and the JS engine often has weird memory leaks.



  • stevecrox@kbin.run to linuxmemes@lemmy.world · Distro's depicted as vehicles · edited 6 months ago

    Nah, Linux Mint is a Kia Ceed.

    Ubuntu is a Ford Focus; they successfully stole the Volvo estate market (Debian). The car was fun, good value and very practical. It was everywhere. Then Ford started increasing the size, weight, price, etc., killing the point of the Focus.

    So along comes Kia trying to make a competitor in the Ceed.

    In theory the Ceed is a great car: it’s super cheap, has lots of cabin space, is nippy, and the inside has every modern convenience, but…

    • It plays engine noises via speakers that aren’t aligned with what you are doing
    • The boot space is rubbish, so 5 people can happily travel in the car but you can barely fit a suitcase in it
    • There is a steering sensitivity button that stays on at 70 mph with no indication on the display
    • A Vauxhall Nova just out-accelerated you

    You’re left wondering why anyone is bothering with hot hatchbacks these days as you climb into your Volvo.


  • stevecrox@kbin.run to linuxmemes@lemmy.world · Distro's depicted as vehicles · edited 6 months ago

    Debian would be a Volvo estate: it’s the boring, practical family choice, and the owner is someone boring like an architect or a financial advisor.

    Arch is a Vauxhall Nova: second-hand and battered, owned almost exclusively by teenage lads who spend a lot of time/money modifying it (e.g. lowering it so it can’t go over speed bumps, adding a massive exhaust that sounds good but destroys engine power).

    Fedora is something slightly larger/more expensive like a Ford Focus/VW Golf/Vauxhall Astra, owned by slightly older lads. The owners spend their time adding lighting kits and the largest sound systems money can buy.

    Slackware is clearly a Subaru Impreza: at one point the best World Rally Car, but it hasn’t been a contender for a while. Almost all are owned by rally fans who spend fantastic amounts of time tinkering to set the car up as the ultimate rally car. None of the owners actually race.

    openSUSE is a Nissan Cube: it’s insanely practical and should be the modern boring family choice, but it manages to be too quirky for your architect while not being practical enough for van drivers.

    I don’t know the other distros well enough.

    I run Debian btw


  • Immutable distributions won’t solve the problem.

    You have 3 types of testing: unit (a discrete part of the code), integration (how a software piece works with others) and system (e.g. the software running in its environment). Modern software development has build chains to simplify testing at all 3 levels.
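
    As a minimal sketch of the first two levels (Python’s built-in unittest; parse_price and the basket are hypothetical stand-ins, not from any real project):

    ```python
    import unittest

    def parse_price(text):
        """Unit under test: turn a price string like '£3.50' into pence."""
        return int(round(float(text.lstrip("£$")) * 100))

    class ParsePriceUnitTest(unittest.TestCase):
        # Unit test: exercises one discrete piece of code in isolation.
        def test_parses_pounds_to_pence(self):
            self.assertEqual(parse_price("£3.50"), 350)

    class BasketIntegrationTest(unittest.TestCase):
        # Integration test: checks two pieces working together
        # (the parser feeding a basket total).
        def test_basket_total(self):
            basket = [parse_price("£3.50"), parse_price("£1.25")]
            self.assertEqual(sum(basket), 475)

    if __name__ == "__main__":
        unittest.main()
    ```

    System testing, the third level, doesn’t fit in a snippet: it means running the assembled application in a real environment.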

    Debian’s change freeze effectively puts a known state of software through system testing. The downside is that it’s effectively ‘free play’ testing of the software, so it requires a big pool of users and a lot of time to be effective. This means software in Debian can be running releases up to 3 years old.

    Something like Fedora relies on the test packs built into the open source software itself, and the issue here is that test quality in the open source world is really variable. So something like Fedora can pull down broken code that compiles and passes its own tests.

    The immutable concept is about testing a core set of utilities so you can run containers of software on top. You haven’t stopped the code in the containers being released with bugs or breaking changes; you’ve just given yourself a means to back out of them. It’s a band-aid over the actual problem.

    The solution is to look at core parts of the software stack and improve their test infrastructure. Phoronix manages to run the latest kernels on various types of hardware for benchmarking, so why hasn’t the Linux Foundation set up a computing hall to compile and run system-level testing on staged changes?

    Similarly, websites are largely developed with all 3 levels of testing, using things like Jest/Mocha/etc. for unit/integration testing and Robot Framework/Cypress/Selenium/Storybook/etc. for system testing. GTK and KDE apps all have unit/integration tests, but where are the system-level test frameworks?

    All this is kinda boring, while ‘containers!’ is exciting new technology.




  • Firstly, it was just a bit of fun, but from memory…

    Twitter was listed as having 2 data centers and a couple dozen satellite offices.

    I’ve forgotten the data center estimate, but most of those satellites were tiny. Google gave me the floor area for a couple and they were for 20-60 people (assuming a desk consumes 6 m² and dividing the office area by that).

    Assuming an IT department of 20 for an office that size is ridiculous, but I was trying to overestimate.


  • The Silicon Valley companies massively over-hired.

    Using Twitter as an example: they used to publicly disclose every site and their entire tech stack.

    I have to write proposals and estimates, so when Elon decided to axe half of the 8000-person company, I was curious…

    I assigned the biggest functional teams I could (e.g. just create units of 10 and plan for 2 teams to compete on everything). I assumed a full 20-person IT department at every site, etc. Then I added 20% to my total, and then 20% again for management.

    I came up with an organisation of ~1200; Twitter was at 8000.

    I had excluded content moderators and ad sellers because I had no experience estimating those, but it gives an idea of the problem.

    I think the idea was to deny competitors people, but in reality that kind of staff bloat will hurt the big companies.


  • Docker Swarm was an idea worse than Kubernetes, came out after Kubernetes, and isn’t really supported by anyone.

    Kubernetes has the concept of a storage layer: you create a volume and can then mount it into a container. The volume is then accessible to that container regardless of where it is running.

    There is also a difference between a volume for a Deployment and one for a StatefulSet, since one is supposed to hold the application’s state and the other is supposed to be transient. The sketch below shows the basic claim-and-mount pattern.
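
    A minimal sketch using the official Python kubernetes client (the demo-data/demo-pod names are made up, and it assumes a reachable cluster and a working kubeconfig):

    ```python
    from kubernetes import client, config

    config.load_kube_config()  # assumes a kubeconfig is available
    core = client.CoreV1Api()

    # Create a PersistentVolumeClaim: the cluster's storage layer
    # satisfies it from whatever backend it has, independent of nodes.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
        ),
    )
    core.create_namespaced_persistent_volume_claim("default", pvc)

    # Mount the claim into a container; the data follows the pod
    # wherever the scheduler places it.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="demo-pod"),
        spec=client.V1PodSpec(
            containers=[client.V1Container(
                name="app",
                image="nginx",
                volume_mounts=[client.V1VolumeMount(name="data", mount_path="/data")],
            )],
            volumes=[client.V1Volume(
                name="data",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="demo-data",
                ),
            )],
        ),
    )
    core.create_namespaced_pod("default", pod)
    ```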



  • Wine attempts to translate Windows calls into Linux ones; it’s developed by CodeWeavers, whose focus is/was application compatibility.

    Valve took Wine and modified it to best support games; the result is called Proton. For example:

    Someone built a library that converts DirectX 9-11 calls into Vulkan ones; it is written in C++ and called DXVK.

    Wine has a strict C-only code rule, and its own DirectX library handles odd behaviour from old CAD applications.

    Valve doesn’t care about that; they care that Wine’s DirectX library is slow and buggy and DXVK isn’t. So they pull out Wine’s and use DXVK.

    There are lots of smaller changes like these; they are ‘Proton fixes’. Sometimes Proton fixes are passed on to Wine. Sometimes they can’t be, but discussion happens and a Wine fix is developed.






  • I actually researched my list: most of the technologies were used internally for years and either publicly released after better public alternatives had been adopted, or the buzz reached me years after Google’s first release. So I am wrong.

    Between 2012 and 2015 I used to consult on Apache Ivy projects (ideally moving them to Maven and purging the insanity people had written). As a result I would get called in when projects had dependency issues.

    The biggest culprits were Guava/Gson: projects would often choose to use them (because Google) and then discover a bug that had been fixed in a later patch release (e.g. they used 2.2.1 and 2.2.2 had the fix). However, the reason they were on 2.2.1 was that a library they needed depended on it, and bumping the version usually caused things to break.

    The standard solution was to ask why they needed Guava/Gson, and every time you would find it was for some function also found in one of the Apache Commons libraries. So I would pull down the Commons library and rewrite that bit (often they worked identically).

    Fun side note: in 2016-2017 I got called in to consult on a lot of Gradle projects, fixing the same kind of convoluted bespoke things people had done with Apache Ivy. The Ivy community knew these ‘features’ were a massive headache back in 2012 and told you to use Maven for exactly those reasons. C’est la vie.

    We tried using Protobuf in 2008 and it was worse than Apache Axis for JSON conversion (which feels too harsh to say). Similarly, I had been using AMQP or Kafka for years and tried gRPC when it was released (Google says 2016, but I am sure we tried it in 2014); it was worse on every metric, and I still don’t understand why it exists.

    I was using Vaadin in 2011 and honestly thought GWT was released in 2012. I had to use GWT in 2014, and its workflow, compile time and look are just worse than Vaadin’s.


  • The FAANG companies have an internal kind of elitism that makes staff less effective.

    If you look at any Google Java library (GWT, Gson, Guava, Gradle, Protobuf, etc.), there was a commonly used open source library that existed years earlier and covered 90% of the functionality.

    The Google staff just don’t think to look outside Google (after all, if Google hasn’t solved it, no chance outsiders have) and so write something entirely from scratch.

    Then, normally within 6 months, the open source library adds the killer new feature. The Google library only persists because people hold FAANG in high regard (“It’s by Google so it must be good!”), yet it normally has serious issues/limitations.

    The Google libraries that actually succeeded weren’t owned by Google (e.g. Yahoo wrote Hadoop, Kubernetes was spun away from Google’s control, etc.).


  • I wouldn’t use “certified” in this context.

    Limiting support of software to specific software configurations makes sense.

    It’s stuff like: Debian might be using Python 3.8, Ubuntu Python 3.9, openSUSE Python 3.9, etc. Your application might use a library requiring Python 3.9 and act odd on 3.8 but run fine on 3.9, so only supporting X distributions keeps the test/QA process sane.
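
    A toy illustration of that skew (the feature gap is real; the script itself is just an example):

    ```python
    import sys

    print(f"Running on Python {sys.version_info.major}.{sys.version_info.minor}")

    defaults = {"theme": "dark"}
    overrides = {"theme": "light", "font": "mono"}

    # The dict union operator was added in Python 3.9 (PEP 584), so this
    # line raises TypeError on a distro shipping 3.8, even though the
    # same application runs fine on a 3.9 distro.
    merged = defaults | overrides
    print(merged)
    ```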

    This is also why Docker/Flatpak exist, since you can pin down all of this.

    However, the normal mix is RHEL/SUSE/Ubuntu, because those target businesses and your target market will most likely be running one of them.


  • I suspect they mean around packaging.

    I honestly believe Red Hat has a policy that everything should pull in GNOME. I have had headless RHEL installs where half the CLI tools required GNOME Keyring (even if they didn’t deal with secrets or store any). Back in RHEL 7, Kate, the KDE-based text editor, somehow pulled in a bunch of GTK dependencies.

    Certification really means someone paid to go through a process, and the process is designed so they pass.

    Think about the people you know who are Agile/Cloud/whatever certified and how all it means is they have learnt the basic examples.

    It’s no different when a business gets certified.

    The only reason people care is that they can point to the cert if it all goes wrong.


  • stevecrox@kbin.run to Linux@lemmy.ml · I'm so frustrated rn. · edited 8 months ago

    Debian isn’t old == stable, it’s tested == stable.

    Debian has an effective rolling distribution through testing, which can get ahead of Arch.

    At some point they freeze the software versions in testing and look for release-critical and major bugs. Once they have shaken everything out and submitted fixes where possible, it becomes stable.

    The idea is people have tested a set baseline of software and there are no known major bugs.

    For the last 4-5 releases Debian has released every 2 years (similar to Ubuntu LTS). Debian tends to align its releases with LTS kernel and Mesa releases, so there have been times when the latest Debian stable ran newer versions than Ubuntu, and the ‘newest software’ crown switches between Ubuntu LTS and Debian each year.

    For some, the priority is to run software that won’t have major bugs; that is what Debian, Ubuntu LTS and RHEL offer.