[svlug] j-core vs RISC-V

Rick Moen rick at linuxmafia.com
Tue May 9 08:43:32 PDT 2017


Quoting Ivan Sergio Borgonovo (mail at webthatworks.it):

> This makes it a bit clearer but I'm still wondering.
> Fortunately current simple processors could run a lot of things and you 
> could hope they will be able to run even more but it is painfully hard 
> and barely useful to live isolated.
> 
> eg. you build up your home server, usual stuff, file server, email, tor, 
> git repository. But then you access those files from your workstation 
> where you compile your software.
> 
> It is a good starting and could probably be a necessary one, but then 
> you'll have to delegate your trust at some point. That point will 
> theoretically neutralize all your effort to be so strict on openness.
> Theoretically even not being so strict will neutralize all your effort 
> to be safe.

I don't want to get into a long side-discussion at this point about
security risks, mitigation, avoiding single points of failure, and so
on -- which unfortunately would be required to examine this notion that
all of your computing is compromised if one machine has problematic
(black box) hardware subsystems. 

We would, in that theoretical, have to delve into what you mean by your
phrase 'delegate your trust'. 

Certainly, if the console machine in front of you has major security
weaknesses, you have a serious _potential_ problem depending on what
you do from that console.  If the console machine in front of you
suffers a major security compromise and you fail to detect both that
occurrence and the secondary effects on other hosts, then you have a
huge problem.
But whether all of that badness necessarily happens depends on other
things, such as the nature of security tokens and how you use them.

And it certainly is not a foregone conclusion that potential security
risks or even a security breach at one of your hosts automatically
causes breaches at all the others.  If it does, you're doing it wrong.

The Debian Project's 2003 experience would be a case in point.  Despite
a kernel-level exploit compromising the four main development servers,
zero compromise of the package repositories occurred, because of
careful handling of the signing keys.
http://linuxmafia.com/~rick/constructive-paranoia.html

Something I wrote in a long-ago article for IDG might be useful in broad
terms:  'Prevention and detection are, of course, very good things, but
ideally they should be part of a better-rounded effort at risk
assessment and management. That should include damage reduction (what is
at risk?), defense in depth (how can we avoid having all our eggs in one
basket?), hardening (e.g., jumpering the SCSI drives read-only for some
filesystems, and altering Ethernet hardware to make promiscuous mode
impossible), identification of the attackers, and recovery from security
incidents.'



> RISC-V doesn't mandate you've to build a CPU with some closed part, you 
> may do so, it use BSD like open license...

Yes, that was clear.  However, the details of implementation are
entirely up to third parties, not the RISC-V standards people, and (to
my eyes), all of the listed implementations seemed likely to follow
usual patterns for embedded computing.  Again, if you hear of one that
breaks the pattern and eschews black-box subsystems, please speak up.
(Ah, I see below that you do.)

> ...and it is already not patent encumbered as j-core.

Unlikely.  https://en.wikipedia.org/wiki/Submarine_patent
The point about SuperH is that everything about all its designs up to
and including SH4 is old enough that all patents have expired,
_including_ ones not publicised.  (Patents are a thorny problem, to
summarise the summary of the summary.)

There are limits to this platform's appeal, obviously.  SH4 is 32-bit
and thus has memory-space limits that make it seem quaint in 2017.  The
follow-on SH5 was 64-bit, but gcc has now dropped support for it, unlike
the 32-bit CPUs.


> SiFive FE310 seems to be completely open.

At a quick glance, it appears so.  I will keep my eye on this.

> I haven't been able to understand if RISC-V can run Linux.

Appears to be a project underway.  Here's a status snapshot of tools
critical to the Debian port:

Toolchain upstreaming status

binutils: upstreamed (2.28 is the first release with RISC-V support)
gcc: upstreamed (7.1 is the first release with RISC-V support)
glibc: not upstreamed yet
linux kernel: not upstreamed yet
gdb: not upstreamed yet
qemu: not upstreamed yet


> At this point I'd say the major advantage of j-core over RISC-V could be 
> its ancestry that has pros (some software, the "Why recreate existing 
> architecture? in their 2015 session) and cons (older architecture).

Fairly said.  And the con of 'older architecture', as noted, provides the
advantage of 'zero patent problems', which certainly isn't everything
but is a significant thing.


> You're not replying to my question.

To be sure, explicitly not.  I implicitly said that, for me, whether a
particular CPU architecture ever sees large Linux deployment isn't an
interesting question, and explained why (and you happen to hold that
same view, I see).  But, with luck, Rob (whom you asked) will field your
query.



