[svlug] SSD was: Slides from the GoLUG meeting

Ivan Sergio Borgonovo mail at webthatworks.it
Thu Jun 2 16:56:48 PDT 2016



On 06/02/2016 11:29 PM, Rick Moen wrote:
> Quoting Ivan Sergio Borgonovo (mail at webthatworks.it):

>> It seems to be a thing of the past. When I bought my 850 PRO I was
>> happy enough that I didn't have to worry about wear, so I didn't
>> split my stuff between platters and SSD. Having /var and ~/ on the
>> SSD made such a BIG improvement in performance that I was already in
>> the "I don't care, it just works" mood. But before deciding I didn't
>> have to worry about wear, I spent a bit of time googling, and I
>> remember reading something along the lines of "now the kernel knows
>> and it can do a much better job than you".

> But in this case the kernel doesn't.

Oh, it does.
https://www.kernel.org/doc/Documentation/block/cfq-iosched.txt

"CFQ has some optimizations for SSDs and if it detects a non-rotational
media which can support higher queue depth (multiple requests at in
flight at a time), then it cuts down on idling of individual queues and
all the queues move to sync-noidle tree and only tree idle remains. This
tree idling provides isolation with buffered write queues on async tree."
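
You can check for yourself whether the kernel has flagged a device as
non-rotational (sda here is just an example name):

  $ cat /sys/block/sda/queue/rotational
  0

0 means the kernel sees a non-rotational device (SSD), 1 a spinning
disk.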

I don't know if that is enough to gain a significant performance 
advantage over the once-suggested deadline scheduler, but all the 
material I could find about tuning the scheduler dates back to 2013, 
while this is fresh Linux documentation from the uber geeks.

You may still be right, since there is no direct comparison between 
deadline and CFQ for SSDs, and the benchmarks I've found are too old 
to rely on if you had to choose based on benchmarks.
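
If anyone wanted to redo that comparison today, switching schedulers 
at runtime is trivial. A rough sketch, again with sda as an example 
device (the change does not survive a reboot):

  $ cat /sys/block/sda/queue/scheduler
  noop deadline [cfq]
  # echo deadline > /sys/block/sda/queue/scheduler

The one shown in brackets is the active scheduler.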

Considering performance and energy savings, it makes a lot of sense 
that Facebook, Amazon, Google and the like are moving to SSDs, and 
since they now buy more gear than desktop PC buyers do, I suspect it 
won't take long before we see fully automagic tuning in Linux.

> Unless you have some distro logic to overcome the default, the kernel
> will use the CFQ scheduler for all mass storage devices, even ones where
> it's objectively slowing down the I/O and wasting CPU cycles.

> Some day, the kernel might gain autorecognition logic to apply the best
> scheduler automagically, but for now my understanding is it doesn't.
> Clearly this doesn't make a _huge_ difference, but the point is that
> it's a one-time tweak that improves your system from that time forward.
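
For what it's worth, the usual "distro logic" to overcome the default 
is a udev rule that pins the scheduler for non-rotational devices. A 
minimal sketch, untested, with an arbitrary file name:

  # /etc/udev/rules.d/60-ssd-scheduler.rules
  ACTION=="add|change", KERNEL=="sd[a-z]", \
      ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"

After a reboot or a udevadm trigger, SSDs get deadline while spinning 
disks keep CFQ.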
>
>
>> On my system the sensible things have smart defaults, and they have
>> been that way for some years I think, and they were changed for
>> everyone, not just for people using SSDs.
>
> Well, my own perspective is driven by my current project to plan and
> build my new server, which uses two 128GB Samsung SSDs as a RAID1 md
> pair.  Putting /tmp, /var/run, and /var/lock into tmpfs, to pick some of
> the low-hanging fruit, seems like an easy win -- and to my knowledge
> that is not default on any Linux distribution.

Is there anything worth considering other than sid?
Well, /run and /run/lock were already on tmpfs in wheezy.
I didn't quite get how systemd manages /tmp, but it seems to put it on 
tmpfs. I'm not sure what the proper way to check is.
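Something along these lines should answer it on a systemd box, though 
(assuming the stock tmp.mount unit is what handles it):

  $ findmnt /tmp
  $ systemctl status tmp.mount

If /tmp is on tmpfs, findmnt will report FSTYPE tmpfs.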

> Note to Mr. Litt about his short-lived hard drives:  Just as a data
> point, I've been getting an average of around a decade of service life
> out of 24x7 usage in my servers.

Yep, pretty suspicious.
I follow the same practice at home for boxes that deserve attention, 
but even on neglected ones I've been getting a much longer average 
lifetime than 4 years.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it



