
Hi Scholar.Social, thank you for having me!

I am a computing scientist at the University of Glasgow in Scotland.

I focus on low-carbon and sustainable computing.


I wrote an article about the need for low-carbon computing and the path towards zero-carbon computing.

wimvanderbauwhede.github.io/ar

@wim_v12e

Does the path towards zero-carbon computing really pass through GitHub?

@lienrag
Maybe not, but maybe the path to as many eyeballs as possible in computing does.
There's always @codeberg to host things, but I don't know how they rank in search engines...
@wim_v12e

@schmidt_fu
If folks choose a platform for search engine ranking, we'll never end monopoly :)

And for the record, we aren't that bad. I stumble upon Codeberg projects being ranked higher than stuff on GitHub more frequently each month.
@lienrag @wim_v12e

@wim_v12e
Very interesting read. I had similar thoughts in senioradmin.de/GreenIT2020.htm (in German).

One problem is that today's software is not efficient. Why is it that computers are 10 to 100 times faster than 20 years ago, yet we still do basically the same things with them and it doesn't feel significantly faster? It is mostly because software, and especially the web, has become more demanding.

@Haydar

Thank you for your article! Where did you get those detailed figures for the laptops?

I agree about software inefficiency. One of the main reasons for this is that time to market matters more than energy efficiency. Another reason is that many developers are simply unaware of the carbon cost of their software; and yet another reason, of course, is that many developers are not capable of writing efficient software.

@wim_v12e The exact numbers are publicly available from the manufacturers; I have linked them in the footnotes.

I think complexity is also a problem. Software in general tends to get more complex over time. To handle this complexity, developers use layers of abstraction. In the end, multiple layers are used to provide a single simple function. This comes at the cost of efficiency.

@Haydar
Ah, I see, I had overlooked the small footnote markers 2, 3, 4.

There is the notion of cost-free (or zero-cost) abstractions: essentially, they exist only at design time and are removed at run time. A lot of the current inefficiency at that level is simply there because the compiler and runtime developers have not bothered to do those optimisations. That is because such optimisations are hard, and the language projects don't make them a focus.
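
To make that concrete, here is a minimal sketch in Rust (chosen only because it makes the idea explicit; the names are mine): the iterator chain is an abstraction the compiler can remove entirely, so it should compile to essentially the same machine code as the hand-written loop.

    // Two ways to compute the same thing: an abstract iterator pipeline
    // and an explicit loop. The pipeline only exists at design time; the
    // optimiser is expected to lower it to the same code as the loop.
    fn sum_of_squares_iter(xs: &[i64]) -> i64 {
        xs.iter().map(|x| x * x).sum()
    }

    fn sum_of_squares_loop(xs: &[i64]) -> i64 {
        let mut total = 0;
        for x in xs {
            total += x * x;
        }
        total
    }

    fn main() {
        let xs: Vec<i64> = vec![1, 2, 3, 4];
        // Both versions give the same result; with optimisations enabled
        // they typically also produce equivalent machine code.
        assert_eq!(sum_of_squares_iter(&xs), sum_of_squares_loop(&xs));
        println!("sum of squares: {}", sum_of_squares_iter(&xs));
    }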

@wim_v12e This is an excellent start to a much-needed area of study. Biocomputing is still a ways off, and we will be using silicon-based computing for a long time. I have been thinking about the energy consumed by our current centralized data centers and how self-hosting provides some improvement to this issue.

@wim_v12e I rely very little on mass data centers since I host most of the services my little community needs on an RPi that doesn't get too hot. Much of what is happening at data centers is data mining, analytics, blockchain, content replication, and advertising. All of this uses extra processing, power, and heat. My services don't have that excess crap and so run more efficiently to serve more local needs, where electrons travel short distances.

@Bernard I like the idea of people hosting their own services, even though purely in terms of energy efficiency it may not be optimal. The ideal would be that people self-hosting have access to local renewables (e.g. solar or wind). I have this vision of a federated low-tech cloud where services are offered by whichever servers have intermittent renewable energy available, so basically when you access a service the system works out which of the servers is up and running.
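
Just to sketch what I mean (all peer names and the port here are hypothetical): a client in such a federated low-tech cloud could simply probe a list of peer servers and use whichever one answers first, i.e. whichever one has sun or wind right now.

    use std::net::{TcpStream, ToSocketAddrs};
    use std::time::Duration;

    // Return the first peer that accepts a TCP connection within a short
    // timeout; a node that has shut down for lack of renewable power
    // simply won't answer.
    fn first_available(peers: &[&str]) -> Option<String> {
        for peer in peers {
            if let Ok(mut addrs) = peer.to_socket_addrs() {
                if let Some(addr) = addrs.next() {
                    if TcpStream::connect_timeout(&addr, Duration::from_secs(2)).is_ok() {
                        return Some(peer.to_string());
                    }
                }
            }
        }
        None
    }

    fn main() {
        // Hypothetical peer addresses; in a real federation this list
        // would be discovered rather than hard-coded.
        let peers = ["solar-node.example.org:443", "wind-node.example.org:443"];
        match first_available(&peers) {
            Some(p) => println!("using {}", p),
            None => println!("no peer currently has power"),
        }
    }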

@wim_v12e That is an interesting idea to have servers come on and off depending on power availability. With current architectures that implies redundancy of compute and replication of data which could increase overall power consumption. Distributed databases and application servers are evolving. To keep your study focused, I suggest leaving the issues of power generation and balancing from different intermittent sources to the electricity grid, but I would love to learn more about the idea.

@Bernard The interesting research question here is exactly that: how much redundancy and data movement is needed to guarantee a certain level of uptime. This would be aimed at services that only handle small amounts of network traffic. Otherwise they are not practical for home servers anyway, and the purpose is to look at sufficiency.
