
Hardware

1 2021-01-29 18:04

Most of the conversations here are about software, but what about hardware? What hardware do you use? What hardware would you like to use? Can we hope for RISC-V/open-hardware solutions without backdoors in the next decade? Does anyone expect any fundamental changes, like the personal computer or mobile computing were? Or any interesting stories about forgotten or unpopular hardware?

2 2021-01-29 19:39

I primarily use Thinkpads, and am typing this on a Lemote Yeeloong. I'd like to have nicer hardware, such as a Lisp Machine, Xerox Star, or another high-level architecture; machines existed decades ago which aren't susceptible to the flaws still haunting what we currently use, but the market passed them over, because the market doesn't care about producing value. Broken machines lead to broken software, which leads to more jobs.

There's absolutely nothing compelling about RISC-V. It's a RISC (Notice how just about everything is called RISC; this is because it's actually a cult.), and it's basically a cleaner C language machine than other RISC designs, but this isn't saying much. It's memory-inefficient, requires the hardware to combine operations (macro-ops) for efficiency, and is really just a bunch of academics masturbating over their clean design that means nothing. Urbit's Nock is also useless, but at least it's different.

More recently, I've been looking into non-von Neumann designs, and there's really no good sense in what we currently have. The future is smaller people beating the incumbents by using fundamentally more efficient designs that simply can't be beaten with conventional designs.

3 2021-01-30 03:48 *

Can we hope for RISC-V/open-hardware solutions without backdoors in the next decade?

If the general mentality is still monolithically RISC-V, the answer will always be no.
>>2

and is really just a bunch of academics masturbating over their clean design that means nothing

Don't forget the lunix community having a virtue menstruation!

4 2021-01-30 04:16

>>1
I have plans to migrate to an in-order corebooted machine without a trusted execution environment. Obviously it's still an idiot box, but at least in principle it's an idiot box capable of doing what it's told. Historically I've run a system which achieves my last two constraints but not the first, and for the last few months I've run on a machine which fails to meet any of these constraints. I'd also like to have a keyboard which isn't a toy, such as a Maltron or Kinesis. At some point I'd also like to figure out how to make a GPS-disciplined oscillator clock, and a non-whitening TRNG for my machine.

5 2021-01-30 04:40

>>1
https://github.com/githubteacher/furry-computing-machine

6 2021-01-30 10:16

>>2

There's absolutely nothing compelling about RISC-V.

Having a processor without IME would be nice.

7 2021-01-30 11:17

>>6

Having a processor without IME would be nice.

This isn't compelling when there are tons of processors without IME, even for x86.

8 2021-02-01 03:51 *

The future is smaller people beating the incumbents by using fundamentally more efficient designs that simply can't be beaten with conventional designs.

Pipe dream.

9 2021-02-01 07:58

>>8
That's exactly what historically happens, and it's called 'disruptive innovation':
new tech spreads from lone-wolf inventors to uproot entire industries.
It's entirely possible someone invents e.g. a memory-centric architecture that beats everything else
at parallel computation.

10 2021-02-01 16:37

>>9
It doesn't seem like there has been any innovation in 15+ years, just more of the same and exploitation of the improved manufacturing process. They're going to keep beating the dead horse as long as they can; Intel has plans to make 1.4nm chips, and is likely to continue pushing the material engineering until there is nothing left but ketchup stains and x86 or similar.

The issue is that any significant change in hardware would require a change in software to exploit them. The switching cost is too high to be competitive, and the barriers to entry are too high for any madman to even try. The way you make money in software has little to do with quality, performance, or utility anyway. Software seems entirely driven by novelty, and manipulative interfaces.

11 2021-02-01 17:04

>>10

The switching cost is too high to be competitive

The current PC-era inertia exists because we reached a performance level
that is comfortable for most use-cases on all commodity hardware,
so to justify switching, an average consumer needs something orders of magnitude better:
a 100x faster machine at the same cost. A 10x faster machine might make sense at a lower cost,
and 5x will not even be considered worth migrating architectures for (Apple can afford that, though).
More important is that the 100x speedup has to be visually/intuitively obvious to consumers
to create mass-market appeal: a niche use that accelerates a specific class of software won't cut it,
so specialized hardware will not be adopted (that's why specialized cards cost so much $$$).
A new arch has to be:
1. competitive on generic (scalar) and bulk (vector/GPU) computation;
2. simpler than x86 but extensible;
3. providing functions not in x86, accelerating common algorithms or software better;
4. more power-efficient than x86; that's a niche that battery-operated devices, datacenters and embedded require, but it also extends to the desktop, where heat increases failure rates.

12 2021-02-01 18:15

>>11

performance level

Consumers just want something they've not seen before; they couldn't care less if it's actually faster, or by how much, or in what applications. If you accelerate in hardware the ability for the user to do something idiotic and useless like swapping faces in real-time video, consumers will shit themselves and wait out in -20C to buy it. We have reached a level where performance doesn't matter, but users still suck dick 9-5 in order to be able to buy a useless shiny distracting idiot box. The switching cost has to do with rewriting all the useless shiny distracting idiot software that consumers use, and the only way to get a competitive advantage would be providing new idiotic novel experiences for the consumer at the same time.

13 2021-02-01 19:50

>>12
Usually the software "novelty" appears as a slow, low-performance product (e.g. a resource-heavy 3D video game) that brings average consumer hardware to its knees.
This creates market demand for better hardware (e.g. graphics accelerators, today known as video cards, were a novelty product for early 3D software/games), in turn allowing more complex software to utilize the accelerator hardware (the current 3D game industry), until
it reaches another bottleneck (e.g. raytracing), forcing market demand for accelerating that (RTX).
The same can be said of VR headsets, which are hitting harsh performance limits already,
and would be the product that "consumers will shit themselves" for if it were perfected in form and function, which shows we're not at the industry's peak.

14 2021-02-01 20:29

What isn't mentioned above is that consumer standards rise with 'average expectations':
e.g. smooth 60Hz was once the norm while 120Hz was considered a premium, and smooth 120Hz requires a faster video card.
Once the 'average experience' is the established standard, market demand appears for something
above it; the next wave of software rapidly converts the premium into the mundane average.

15 2021-02-01 21:13

>>13
Yes, the needs of useless shiny distracting idiot software are reified into a useless shiny distracting idiot box, which then affects future useless shiny distracting idiot software. Base and superstructure. Performance only exists with respect to some application: accelerating ray tracing, or even just optimizing for it, does not improve e.g. compile times. In fact, for some users accelerating ray tracing is completely useless and does not improve performance for them in any way.

The point I was making here is simply that domain-general performance does not exist, and what you're actually optimizing for is idiotic novel experiences, and this is where the difficulty comes in for truly innovative hardware (that is to say, hardware which would require rewriting most software to use effectively). How do you re-implement all the existing useless shiny distracting idiot software people already use, while creating some sort of idiotic novel experience for them so they line up in -20C? The answer is you don't, and so you're stuck beating the horse until there is nothing left but ketchup stains and x86 or similar.

16 2021-02-01 21:16

https://news.ycombinator.com/item?id=25992663 (Itanium status: deprecated)

17 2021-02-01 21:20

accelerating ray tracing, or even just optimizing for it, does not improve e.g. compile times
In fact for some users accelerating ray tracing is completely useless and does not improve performance for them in any way.

You can use GPUs to accelerate any code: https://en.wikipedia.org/wiki/OpenACC

18 2021-02-01 21:29

>>17
Without gamers, modern neural networks and GPGPU computing wouldn't exist.
The 'useless shiny distracting idiot boxes' are driving the market forces that shape your experience, though
you're probably thinking of yourself as unaffected by consumerism and above the plebs:
you're using products that depend on the masses buying and selecting for the next 'useless shiny distracting idiot box', due to economies of scale, and unless you program your own FPGAs and microcontrollers, the hardware is what the market dictates it to be.

19 2021-02-01 21:34

>>17
To my knowledge compilers aren't generally amenable to parallelism. I'm not aware of any which do this, or really how this would work.

>>18
Vector machines predate GPUs, I haven't made my mind up on these but I wish neural networks did not exist. I'm painfully aware of how market forces influence my life, and I'm extremely pissed off that I have to use an idiot box.

20 2021-02-01 21:52

Vector machines predate GPUs, I haven't made my mind up on these but I wish neural networks did not exist.

What is meant here is that GPUs made neural networks cheap to program and accessible (as consumer-level hardware). You could probably buy/rent some dedicated NN hardware (e.g. Google TPUs) that
is more specialized (like those vector machines), but it would be much more expensive and limited in usability. GPUs turned out to be generic computing devices (OpenCL, OpenACC, compute shaders, CUDA, ROCm) because flexibility won over fixed-function pipelines.
Ironically, "complex vector processing" turned out to be best done by masses of tiny, dumb CPUs (GPU cores) working as a massive network executing some variation of compute shaders (invented to add dynamically computed graphic overlays to a framebuffer).

21 2021-02-01 22:12

>>19

To my knowledge compilers aren't generally amenable to parallelism. I'm not aware of any which do this, or really how this would work.

It's possible; modern efforts just don't target C/C++ (which uses make -j to parallelize gcc invocations across files):
https://rustc-dev-guide.rust-lang.org/parallel-rustc.html

22 2021-02-01 23:13

>>20
Massively parallel systems are a relatively recent popularization, and I'm largely unfamiliar with the GPU terminology you've used here. Regardless, ILLIAC IV existed, and there were a few similar designs. By constraining the argument to “capable of doing cheaply”, you naturally limit any discussion to consumer hardware, and so clearly from this view consumer hardware is where all the innovation occurred. I've also just realized that GPUs existed within my timeline, so there is that as well. Just as a reminder, here are my claims:

1. There hasn't been any innovation in hardware in 15+ years, just exploitation of the improved manufacturing process.
2. This is because we're optimizing for novel experiences and existing experiences, and it would be very costly to port all the existing experiences while developing novel ones.

You seem to disagree both with my definition of innovation in hardware, and with the idea that the optimization for novel experiences has some sort of general utility. While my initial claims were in the context of x86, we've expanded them to GPUs, which I am okay with. Now the basic contention is that the backwards-compatible (and therefore, by my definition, not innovative) RTX (which I know little about) is innovative by your definition, and useful for neural networks (which I see as a mechanism for idiot software), and is therefore generally useful. I think I'm willing to leave the disagreement here; I'm sure we've both wasted enough time.

>>21
iirc -j refers to compiling multiple files which don't depend on one another concurrently on the CPU. Rust seems to be talking about concurrency as well; the crate thing would probably be done using SIMD in parallel, if I had to guess. I do wonder if there are any compilers which run on the GPU, or do considerable work in parallel (rather than concurrently).

23 2021-02-02 00:49 *

>>22
I worked 50.8 hours for university last week, and what little free-time I've had has been spent poorly. I've spent altogether too much time doing things like having discussions on web forums. I'm going to try my best to only visit these places if I have a question in need of answering, as selfish as this may be, it seems to me necessary. The message is being posted for the purpose of personal accountability, in hopes that anyone who recognizes me should I post here would rightly be disgusted knowing the situation.

24 2021-02-02 07:17 *

>>22
Do you think any Rust compiler will get autovectorisation techniques?
>>23
Same but worse, hang in there.

25 2021-02-02 09:21

>>24
LLVM IR (which Rust compiles to) can run on GPUs.
https://mlir.llvm.org/docs/Dialects/GPU/

26 2021-02-02 17:27 *

In my experience compiling big C/C++ projects on many cores, it is usually disk access that becomes the bottleneck, and memory during linking.

27 2021-02-05 13:24

New LISP hardware just dropped:
https://twitter.com/lisperati/status/1357029088343506944

28 2021-02-05 14:00 *

>>27

It's a C machine running Emacs.

Quality bait.

29 2021-02-06 10:48

Is this a good gaming laptop?
https://balthazar.space/wiki/Balthazar

30 2021-03-23 01:33 *

$150 RISC-V computer soon.
https://riscv.org/news/2021/01/beagleboard-org-and-seeed-introduces-the-first-affordable-risc-v-board-designed-to-run-linux/
