[svlug] Intel Active Management Technology (AMT): not necessarily your friend

Ivan Sergio Borgonovo mail at webthatworks.it
Sun May 7 07:17:08 PDT 2017


On 05/06/2017 09:03 PM, Rick Moen wrote:
> Quoting Ivan Sergio Borgonovo (mail at webthatworks.it):

>> I think the problem in the MIPS/bucks ratio is mainly a problem of
>> produced units and software support. x86 architecture is definitely
>> not superior to alternatives.
>
> Moreover, it's really rare that for Linux use you particularly care
> about CPU power.  In most use-cases and with most hardware (particularly
> hardware aimed at MS-Windows), CPU grunt is just not the limiting
> factor.
>
> And:
>
>> There is a market for media player/smart TV that requires fast CPU/GPU
>> to play high resolution video.

> One of the biggest changes I've noticed in the market compared to a
> decade or so ago is the rise of GPU circuitry -- one of the few examples of
> dedicated coprocessors to have become truly prevalent.  Most of those
> GPU designs have a secret-sauce problem, though.  E.g., Linux on my
> MacBook Air has drastically worse video performance than OS X does.

The last time I was interested in how GPUs worked, I was reading 
"Programmer's Guide to the EGA and VGA Cards, Second Edition".
Now I have no idea why they need a software secret sauce.
I can't tell whether they keep the software secret because it would 
reveal too much about the underlying hardware, or whether the software 
itself is the secret sauce.

You'd hope that since GPUs are more and more used for HPC, we'll see 
more standardization and openness.

But I'm not at all sure this trend is going to last, or at least that it 
will have a positive knock-on effect on *video* drivers in the coming 
3-5 years.

GPUs are parallel number-crunching machines, very well suited to linear 
transformations, and linear transformations are ubiquitous in science.
Again, economies of scale played a role: PC GPUs were abundant, so 
scientists started to exploit some of their characteristics.
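
As a minimal sketch of why GPUs map so naturally onto linear algebra (a 
hypothetical CUDA example of mine, not from any real project): a SAXPY, 
y = a*x + y, is one independent multiply-add per element, so thousands 
of GPU threads can each grab one element with no coordination at all.

// saxpy.cu -- hypothetical sketch: y = a*x + y, one thread per element
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard the last, partially filled block
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));   // unified CPU/GPU memory
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    // 256 threads per block, enough blocks to cover all n elements
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}

Every iteration is independent, which is exactly the shape of a 
matrix-vector or matrix-matrix product; that is why the same silicon 
that pushes pixels turned out to be useful to scientists.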

There will be a point where scientists need to go faster, and the 
characteristics of video cards and "parallel floating point units" will 
start to diverge.

On the other hand, we don't have 8D audio, ultra-hyper-Dolby-surround, 
etc.[1], so sooner or later having better "video" performance will be 
irrelevant. The *video* part of a video board won't be that important, 
and you may end up having dedicated, nearly general-purpose coprocessors;
e.g. you could put "consumer" speech recognition software on the video 
board (say, for a noisy room with more than one person speaking).

So the future is more standardization. It remains to be seen whether 
that arrives in the coming 3-5 years or in the next 5-10.

Unfortunately it's not happening now, when I'd like to buy my Ryzen 7 
and three 4K displays for my work: you still have to buy a $300 video 
board to get three DisplayPort outputs even if you're just interested in 
simple 2D stuff, you can't offload gcc compilation to the GPU, and you 
still have to deal with buggy drivers.

>> I don't know about architecture advantage of j-core vs RISC-V but
>> considering j-core is just on FPGA but you can buy real RISC-V and
>> considering who's behind RISC-V I'd bet on RISC-V.

> The point of j-core is to be a vanishingly rare example of a totally
> open, all the way down to the microcode and throughout all of its
> functions, _and_ very inexpensive SoC that is powerful enough for many
> types of current computing.  And no patent problems of any kind, as
> Hitachi's (and everyone else's) SuperH design patents have all
> expired.

This looks more like a university laboratory project. You don't need to 
know the microcode to have a standard ISA.

The RISC-V license gives much more freedom to actual implementers 
without imposing a burden on programmers.

And yeah, we may agree that it is better to know everything, but I'd 
question whether it is better to have a truly open CPU on paper or a 
mostly open, standardized ISA on silicon.
And yeah, rms was right on BitKeeper (and many other things), and he 
definitely wrote much more relevant software than ESR, but I'm running 
Linux and not Hurd, systemd and not... oh sorry, I didn't mean to be rude <g>.

> As I said, I don't know anything about RISC-V, but even purportedly
> 'open' hardware designs, even with Coreboot, etc., have specialised
> proprietary hardware subsystems to run USB and hard-drive controllers,
> run ACPI and low-level system management services, and much more.  With
> a j-core SoC, none of those black boxes would exist.  It's turtles all
> the way down.

RISC-V has a BSD-like license, so yeah, implementers may decide to 
"close" part of the design or simply not standardize it.
This has pros and cons.

It still looks more interesting than j-core, but I don't see it getting 
enough traction in the near future.
Numbers count.
Better architectures than x86 have been around for many years.
ARM is getting some traction and may become a competitor to x86 even in 
the PC/datacenter space, but it took many years and huge volumes 
conquered in another market.

POWER was/is a nice architecture backed by no less than IBM; even IA64 
was a nice architecture backed by no less than HP.
I'm not convinced that being open and "supported" by big names will make 
RISC-V more successful than POWER was.

There are very few incentives to consider anything other than ARM.
Even though supporting a new architecture's ecosystem is getting easier 
than before, to be competitive a new architecture would have to be at 
least one of:
- much faster
- assured of large sales volumes from the beginning, to justify the 
investment in implementations fast enough to compete with ARM and x86.

ARM is "good enough" and in some ways better than x86.
No new architecture is going to be substantially faster than ARM unless 
it is a specialized architecture that can't provide "big numbers".

If ARM gets into the PC/datacenter space, the software guys will be 
more motivated to write tools that make it even easier to support 
multiple architectures, and maybe in the more distant future we will see 
newer architectures becoming popular, with more specialization hidden 
behind a better software layer.

Doubling the number of transistors every 18 months doesn't mean we are 
doubling the performance of computer hardware.
At some point, if the cost of switching architectures drops enough, 
switching will become worthwhile just to squeeze every last bit of 
performance out of the hardware.
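
One standard way to see why (my illustration, not something from this 
thread) is Amdahl's law: extra transistors mostly buy parallel units 
these days, and for a workload whose parallelizable fraction is p, n 
parallel units give at most

  speedup = 1 / ((1 - p) + p/n)

so with p = 0.9, even n = infinity caps out at 1/(1 - 0.9) = 10x. Past 
that point, more of the same generic hardware stops helping and only a 
better-matched architecture does.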

I'm not holding my breath.


BTW, OpenPOWER seems to be open in name only.

[1] but still

http://hothardware.com/news/10000-ethernet-cable-claims-earth-shattering-advancement-in-audio-fidelity-if-youre-stupid-enough-to-buy-it

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it http://www.borgonovo.net



