[svlug] j-core vs RISC-V

Rob Landley rob at landley.net
Tue May 9 16:08:47 PDT 2017

On 05/09/2017 10:43 AM, Rick Moen wrote:
> Quoting Ivan Sergio Borgonovo (mail at webthatworks.it):
>> This makes it a bit clearer but I'm still wondering.
>> Fortunately current simple processors could run a lot of things and you 
>> could hope they will be able to run even more but it is painfully hard 
>> and barely useful to live isolated.
>> eg. you build up your home server, usual stuff, file server, email, tor, 
>> git repository. But then you access those files from your workstation 
>> where you compile your software.
>> It is a good starting point, and probably a necessary one, but then 
>> you'll have to delegate your trust at some point. That point will 
>> theoretically neutralize all your effort to be so strict about openness.
>> Theoretically even not being so strict will neutralize all your effort 
>> to be safe.
> I don't want to get into a long side-discussion at this point about
> security risks, mitigation, avoiding single points of failure, and so
> on -- which unfortunately would be required to examine this notion that
> all of your computing is compromised if one machine has problematic
> (black box) hardware subsystems. 

Oh, there's all sorts of fun if you dig down a bit. There are
fabrication-time attacks you can do just by tweaking dopant and not even
moving wires (so taking your chip apart with Taiwan's
reverse-engineering equipment and examining it won't spot the changes).

That said, monitoring your fabrication process and doing so in a
non-five-guys country presumably helps a bit.

We're currently doing our first production run of "turtle boards", which
are basically a Raspberry Pi 2B with a Xilinx Spartan-6 LX25 FPGA for a
brain instead of an ARM processor. There are precisely two other pieces
of logic on these boards with any processing capability:

1) An 8-bit Atmel boot processor that loads the bitstream from SPI flash
into the FPGA on power-up, so it can start acting like j-core (or RISC-V
if you're into that).

2) A USB 2 hub chip. (A really stupid hardwired one which is _not_
vulnerable to the BadUSB exploit; Jeff made sure.)

Everything else (ethernet, serial, hdmi, audio jack, sdcard, etc) is
wired straight into the FPGA (modulo standard electrical buffering) and
driven by your bitstream. This is as close to provably non-backdoored
hardware as we know how to make. :)

Of course https://www.bunniestudios.com/blog/?p=3554 is left as an
exercise for the reader...

> Certainly, if the console machine in front of you has major security
> weaknesses, you have a serious _potential_ problem depending on what
> you do from that console.  If the console machine in front of you has
> major security compromise and you fail to detect both that occurrence
> and the secondary effects on other hosts, then you have a huge problem.  
> But whether all of that badness necessarily happens depends on other
> things, such as the nature of security tokens and how you use them.

Your average laptop has something like seven different processors in it,
and they're all exploitable.




And so on and so forth...

The most surprising email I got back when I was maintaining busybox was
from the administrator of the big wargames-style display at Cheyenne
Mountain (which was still open at the time) saying it was running
busybox. When I went "dear FSM _why_", he said they had to audit every
line of code that goes into those systems, and they'd rather audit 1
megabyte of busybox than 110 megabytes of equivalent gnu crap.

(My opinion is you don't secure a system by _adding_ stuff. You secure a
system by _removing_ stuff.)

> Yes, that was clear.  However, the details of implementation are
> entirely up to third parties, not the RISC-V standards people, and (to
> my eyes), all of the listed implementations seemed likely to follow
> usual patterns for embedded computing.  Again, if you hear of one that
> breaks the pattern and eschews black-box subsystems, please speak up.
> (Ah, I see below that you do.)
>> ...and it is already not patent encumbered as j-core.
> Unlikely.  https://en.wikipedia.org/wiki/Submarine_patent
> The point about SuperH is that everything about all its designs up to
> and including SH4 are old enough that all patents are expired,
> _including_ ones not publicised.  (Patents are a thorny problem, to
> summarise the summary of the summary.)

The existing implementation of SuperH is prior art. We've already had
various organizations demand license fees from us, and Jeff did the
"bring it, in writing, in a 'legally prejudicial to your case if you
don't file the suit in a timely manner' way" dance.

This isn't his first hardware company. :)

> There are limits to this platform's appeal, obviously.  SH4 is 32-bit
> and thus has memory-space limits that make it seem quaint in 2017.  The
> follow-on SH5 was 64-bit, but gcc has now dropped support for it, unlike
> the 32-bit CPUs.

We've got a 64-bit instruction set specced out on
http://j-core.org/roadmap.html (which desperately needs updating, but
we've been busy). It's not shmedia (i.e. sh5); it's shmobile (sh4)
with a new mode bit and tweaks, a la x86 -> x86_64.

The reason sh5 failed is related to the reason SuperH rolled to a stop
in the first place: the 1997 Asian economic crisis made Hitachi spin off
its chip design efforts a few years later, which is where Renesas came
from. (I think they partnered with NEC?) But Renesas only inherited the
technology, not the engineers (who stayed with Hitachi). So their
attempts at further development after sh4 were so uninteresting they
never shipped; they went off and did new chip designs from scratch, and
lost interest in the Hitachi stuff they'd inherited and couldn't
really maintain ("not invented here")...

Which is why, 17 years later, there isn't a thicket of newly filed
submarine patents on minor tweaks to the old stuff (the exact same
circuit but on a 45 nanometer process, or with a slightly larger cache,
or...): they put no new investment into old technology they had no
interest in.

Then when we announced j-core they went "There's interest in this?
what?" and started staffing it up again, out of reflex action as far as
we can tell.

Eh, politics. Not my area.

>> At this point I'd say the major advantage of j-core over RISC-V could be 
>> its ancestry, which has pros (some software, the "Why recreate existing 
>> architecture?" point in their 2015 session) and cons (older architecture).
> Fairly said.  And the con of 'older architecture', as noted, provides the
> advantage of 'zero patent problems', which certainly isn't everything
> but is a significant thing.

There are a bunch of old architectures going out of patent (due to the
whole "RISC will unseat x86!" wars of the late 80's and early 90's).
Jeff chose SuperH because of instruction set density. The ELC talk
covers that. (There are slides and youtube video available if you're
interested.)
There's also a sweet spot where the architecture is old enough to be
really simple (and thus small and fast, from before the whole Pentium 4
30+ pipeline stages disease kicked in), but new enough to be a _second_
generation RISC, where they'd learned to design an instruction set that
didn't give the compiler migraines.

>> You're not replying to my question.
> To be sure, explicitly not.  I implicitly said that, for me, whether a
> particular CPU architecture ever sees large Linux deployment isn't an
> interesting question, and explained why (and you happen to hold that
> same view, I see).  But, with luck, Rob (whom you asked) will field your
> query.

What was his query? (I pointed at Jeff's talk, did that not cover it?)

J-core has a mailing list if you're curious. Linked from the web page
and probably more on topic than here. :)
