[svlug] j-core vs RISC-V
rob at landley.net
Sun May 14 22:43:54 PDT 2017
On 05/10/2017 10:03 AM, Ivan Sergio Borgonovo wrote:
> On 05/10/2017 01:08 AM, Rob Landley wrote:
>> Oh there's all sorts of fun if you dig down a bit. There are
>> fabrication-time attacks you can do just by tweaking dopant and not even
>> moving wires (so taking your chip apart with Taiwan's
>> reverse-engineering equipment and examining it won't spot the changes):
> Thanks for sharing. I had a suspicion it could be done, but seeing it in
> full technicolor is different.
I should point out that was by no _means_ a full list. There are dedicated
presentations on this if you dig.
>> The most surprising email I got back when I was maintaining busybox was
>> from the administrator of the big wargames-style display at Cheyenne
>> Mountain (which was still open at the time) saying it was running
>> busybox. When I went "dear FSM _why_" he said they had to audit every
>> line of code that goes into those systems and they'd rather audit 1
>> megabyte of busybox than 110 megabytes of equivalent gnu crap.
>> (My opinion is you don't secure a system by _adding_ stuff. You secure a
>> system by _removing_ stuff.)
> You want a very simple system of defence... build up a big wall.
I've long had a policy that if there's an xkcd explaining why not to do
it, you should rethink your approach. (Every rule has an exception, and
I think https://xkcd.com/1782 is probably it, but we'll see when 2051
rolls around.)
There's a similar rule that if the resident has promised to do it, it's
probably a bad idea. (The exception to the rule that every rule has an
exception is rule 34.)
(I also note that Godwin's law probably applies to the current
administration, and have no idea why you used this analogy.)
> You can spend billions on that wall alone, will it get perfect?
> Will it work?
If you spend billions on something up front all in one go before testing
anything, it probably won't work, no. If you spend billions on something
already proven not to work, that its own designers tell you won't work,
it's probably even less likely to work.
> For the same reason I'm not completely sold on the role of j-core in
We're not doing it explicitly for security, but we think it's something
that can _be_ secured. (The current implementation is nommu, security is
not exactly job 1 there. :)
But a simple chip aiming at a "sweet spot" of as much functionality as
you can fit into a reasonable amount of complexity (in our case "the
whole SOC runs in an lx9 fpga" was an engineering constraint for a lot
of development) is a good starting point for a properly securable
system. Ours was based on a proven design that itself grew out of an
awful lot of research (learning from the first generation of RISC chips,
statistical analysis of instruction frequency in compiler output, etc.
Hitachi did an awful lot of research back in the day, and a lot of those
engineers had retired or gone into academia when we asked around and
were happy to have somebody "honoring the spirit" of their work...)
The various fab-time attacks are potentially mitigated by _diverse_
fabrication, with the ability to use everything from the old 150
nanometer fabs up through the cutting edge of... what are they up to,
does 9 nanometer actually work yet? (Last I heard the price/performance
sweet spot we were looking at was something like 45 nanometer fabs?
There's a knee in the curve, but it's not my area...)
> If you're going down that rabbit hole, you're going to play in the same
> league as NSA and NSA probably knows how Intel or AMD are really made.
My grandfather worked for the NSA for 40 years because he did
cryptography during world war II and afterwards they threatened to draft
him and put him on the ground in Korea if he didn't volunteer for this
new organization they were creating, and then he couldn't get _out_
until some idiot tried to hand off intel to him in his hotel room when he
was being a humble civilian contractor "upgrading" the Iraqi phone
network in the 80's and nearly got him killed and _then_ his identity
was blown and they let him out.
My parents met working on the Apollo moon launches in Florida. I grew up
on Kwajalein in the Marshall Islands because my father was working on
ICBM guidance and tracking systems during the cold war and that's where
they did test launches from/to. We moved to New Jersey when I was 10 so
he could help develop the Aegis phased array radar system.
There's a reason I've never taken a job that requires a security
clearance. I don't want to get that crap on me, and I prefer to stay in
the light (sunlight being the best disinfectant and all) and use peer
review and so on.
I'm sure all my devices are already exploited (when I'm in Japan my
phone battery lasts days, here in the US it goes dead in hours, it's
really annoying). But I also strongly suspect the majority of the US
security services disappeared up their own asses years ago (the Snowden
thing was contractors giving security clearances to other contractors!)
and are now exclusively concerned with keeping their funding stream
unlimited, and the reason they want everybody's data is to blackmail
whoever winds up in politics 30 years from now with the porn they read
as teenagers so they rubber stamp the black budget requests. Meaning
they'll probably never use it for any other purpose (such as law
enforcement) because that would compromise their sources.
*shrug* Possibly less so these days, now they've got Godwin's Law
breathing down their necks. Dunno. It's Emu War territory and I can't
predict anything about it. So far
https://twitter.com/drvox/status/862369684838121473 sounds right.
> Now if you feel threatened by your secret services, you'd better buy a
> ticket to Moscow, one way.
Been there twice (contract with Parallels in 2010, their headquarters is
in Moscow). It was worth it to help bring container technology from
OpenVZ into the vanilla kernel, but I didn't feel safe and am SO glad I
never have to go back there again.
> Probably my fault but if you don't mind I'd be really curious to know what
> kind of applications are you thinking about.
We're making better synchrophasors and hooking them up to the internet:
Electrical grids were designed around the idea of centralized generation
from which power flows in one direction to consumers, so you only had to
measure it at the generators and maybe the substations. But now we've
got solar and wind feeding power back in at the edges, and once you go
above around 3% on that the voltages go out of spec and people's
electronics get unhappy. So we're retrofitting the grid with sensors so
the whole thing can switch over to 100% solar and wind over the next
decade. Combine this with batteries and your more optimistic forecasters
expect peakers to go away around 2020 and 100% renewable base power by 2030.
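To make the "measure it at the edges" idea concrete, here's a toy single-bin DFT phasor estimator in Python — a much-simplified sketch of the core computation a synchrophasor performs. (Real PMUs also GPS-timestamp the phase and follow the IEEE C37.118 standard, none of which is modeled here; all names in this sketch are mine, not from the j-core code.)

```python
import math

def estimate_phasor(samples, sample_rate, freq=60.0):
    """Estimate magnitude and phase of `freq` in a sampled waveform
    via single-bin DFT correlation (a toy synchrophasor estimator)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * k / sample_rate)
             for k, s in enumerate(samples)) * 2 / n
    im = sum(-s * math.sin(2 * math.pi * freq * k / sample_rate)
             for k, s in enumerate(samples)) * 2 / n
    return math.hypot(re, im), math.atan2(im, re)

# Synthetic 60 Hz wave: amplitude 170 V (~120 V RMS), 30 degree phase.
rate = 7680  # 128 samples per 60 Hz cycle
wave = [170 * math.cos(2 * math.pi * 60 * k / rate + math.radians(30))
        for k in range(rate // 60)]  # exactly one full cycle
mag, ph = estimate_phasor(wave, rate)  # recovers ~170 V at ~30 degrees
```

Compare the phase angle measured at a solar feed-in point against the one at the substation and you know which way power is actually flowing — that's the information the grid operators currently don't have at the edges.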
Here's a Stanford professor (Tony Seba) teaching a class in 2013, then
giving a book talk last year, then having his book talk analyzed by a
mutual fund in India earlier this year:
I could give about 30 other links if you're bored.
About a third of the ones on
https://www.youtube.com/channel/UCr81EUb2qVJVfmmlJMxEHVw/videos are very
good (the rest are environmentalism, not technology).
Anyway, the optimistic people are probably right because humans never
forecast exponential growth accurately. Moore's Law's been replaced by
Swanson's Law (solar panels were $76.67/watt in 1977, $0.36/watt in
2014, that's a curve that's hard for humans to keep up with and it seems
to be _accelerating_, and recently taking battery technology along for
the ride). The analogies people keep making are to cars displacing
horses (about a decade from 1% to 99%), digital cameras displacing film
(ditto), analog to digital phones (cell and voip)...
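For scale, the two price points quoted above pin down the average decline rate — a quick back-of-the-envelope calculation:

```python
# Implied average annual price decline of solar panels from the
# figures above: $76.67/watt in 1977 down to $0.36/watt in 2014.
years = 2014 - 1977                   # 37 years
ratio = 0.36 / 76.67                  # overall price ratio
annual_factor = ratio ** (1 / years)  # constant yearly multiplier
annual_decline = 1 - annual_factor    # works out to roughly 13-14%/year
```

A steady ~13% annual price drop compounds to a 200x cost reduction over those 37 years, which is exactly the kind of curve humans consistently fail to extrapolate.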
Speaking of Godwin's Law, did you notice how 80% of Russia's export
income is from oil and natural gas (without which they basically can't
even feed themselves), and the CEO of Exxon was happy to divest himself
of his stock assets when he joined the resident's cabinet? (Ordinarily a
CEO selling all his stock in the company is considered a bad sign, but
he found a way.)
So yes, the fossil fuel industry's like 1/6 of the economy, it's set to
dry up and blow away over the next decade, and it's taking the calm
dignified approach the RIAA and MPAA did when confronted with Napster.
(Add in https://landley.net/notes-2013.html#26-10-2013 and it's
interesting times indeed.)
> As a humble developer I understand the value of being open "turtles all
> the way down".
That one we only have the slides of:
(Apparently linuxcon had their cameras stolen one year, and the Linux
Foundation's response was to not record panels anymore in subsequent
years. We found out when our panel wasn't recorded. Luckily Tim Bird
still runs ELC despite the Linux Foundation nominally taking that over...)
> But some things have higher costs than just submitting a
> patch to a project and can't be so easily shared.
> It would make more sense if companies where used to share VHDL source
> but I couldn't see any sign you're involving anyone else from the site
> or the ML (it seems you've little time left for marketing, or anything else).
The engineers have been busy working on a j-core ASIC, and doing other
board engineering for customer devices:
I admit we're stretched a bit thin. Startups. We haven't done a third
VHDL tarball source release because we want to get the _repository_ up
on github, but it was developed in mercurial as a series of nested
subrepos and converting it to a unified git repository turned into a
huge pain that ate several weeks of an engineer's time and then
went back on the todo list.
(We also didn't keep the proprietary stuff and the open stuff quite
properly separated during development, and need to audit it after the
conversion. Auditing tarballs is easy, auditing the full repo history is
a bit more of a time sink.)
> That's because I think the main advantage of an open architecture for a
> humble developer like me would be a common ISA with no surprise, more
> competition, more "specialization" and probably once things start
> rolling faster innovation.
SuperH exists. We're compatible with it. We backported 2 instructions
from sh3 to sh2 and added cmpxchg modeled after s360 (because sh2 didn't
have SMP and we do).
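For readers who haven't met it: compare-and-swap is the primitive that makes lock-free SMP updates possible — swap in a new value only if nobody changed the old one first, and retry otherwise. Here's a plain-Python model of the semantics (on real hardware this is a single atomic instruction; the names are illustrative, not from the j-core sources):

```python
def cmpxchg(mem, addr, expected, new):
    """Model of compare-and-swap: store `new` at mem[addr] only if it
    still holds `expected`; always return the value that was there."""
    old = mem[addr]
    if old == expected:
        mem[addr] = new
    return old

def atomic_increment(mem, addr):
    """Lock-free increment built on cmpxchg: retry until no other
    CPU changed the value between our read and our swap."""
    while True:
        old = mem[addr]
        if cmpxchg(mem, addr, old, old + 1) == old:
            return old + 1

mem = {0: 41}
atomic_increment(mem, 0)  # mem[0] is now 42
```

The retry loop is why one instruction suffices: any interleaved write by another CPU makes the swap fail, and the loser just reads the fresh value and tries again.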
> Near monopoly in the x86 market and NDA, licenses, ARM veto on
> "improvements" when the margin in gaining performance from silicon and
> the ROI are shrinking don't look like a good premise for innovation.
I expect that was Jeff's take on it too, at least in part.
>> What was his query? (I pointed at Jeff's talk, did that not cover it?)
> Do you think there is any chance to see large deployment of Linux on a
> new architecture?
Yes, obviously. (I assume you mean other than the products we're making
based on it, which we're hoping will by themselves put millions of
devices around the world?)
Your question is weird. "Do you think Athlon has a chance because it
merely runs x86 instructions but isn't from Intel? After all, it's a new
architecture..." It's... not entirely a new architecture from its users'
perspective.
SuperH development stalled because the 1997 Asian economic crisis
stopped Hitachi from investing in its chip design business, and they
unloaded it a few years later (spinning off a new company called
"Renesas" in partnership with NEC or somebody). Renesas inherited the
designs but not the engineers who created them, and when Renesas put
together a new engineering team that tried to extend superh past
Hitachi's sh4, they failed to interest anybody in sh5 which cratered so
badly it basically didn't ship. Then Renesas gave up and created new
chips from scratch (with their own instruction sets), and kept milking
superh essentially unmodified until the patents expired. Then we started
poking at sh2 just as they end-of-lifed it (because the patents expired,
ergo no more money to be made from that IP).
Meanwhile sh chips continued to be widely used in things like the
japanese automotive industry, and last I checked Renesas still sells it
in the shmobile devices (which are arm+sh the same way qualcomm
snapdragon is arm+hexagon). It's a very nice design, which never
actually went away. The one company that owned the IP stopped investing
in it because of its own resource constraints.
There was nothing wrong with the superh technology. There were business
decisions imposed by resource constraints that don't apply to j-core. We
point to sh2 being the chip in the Sega Saturn and sh4 in the Dreamcast
because those were visible here in the US. Those were not the only uses
of this instruction set. :)
> BTW this is one of the many times I'm astonished about how such a small
> team can come up with such a complicated project. Kudos.
Jeff is really good. I'm mostly just a software guy. (I came in to do
the BSP and went off on a lot of tangents because startup.)