219 points by chmaynard 4 days ago | 50 comments
Hendrikto 4 days ago
Sounds promising. Just like EEVDF, this both simplifies and improves the status quo. Does not get better than that.
amelius 4 days ago
Why isn't the level of preemption a property of the specific event, rather than of some global mode? Some events need to be handled with less latency than others.
btilly 4 days ago
To stand ready to respond reliably to any one kind of event with low latency, every CPU-intensive program must pay a performance penalty all the time. And this is true no matter how rare those events may be.
Someone 4 days ago
Not necessarily. The CPU can do it in hardware. As a simple example, the 6502 had separate “interrupt request” (IRQ) and “non-maskable interrupt” (NMI) pins, supporting two interrupt levels. The former could be disabled; the latter could not.
A programmable interrupt controller (https://en.wikipedia.org/wiki/Programmable_interrupt_control...) could also ‘know’ that it need not immediately handle some interrupts.
themulticaster 4 days ago
By the way, NMIs still exist on x86 to this day, but AFAIK they're only used for serious machine-level issues and watchdog timeouts.
refulgentis 4 days ago
Generally, anything done in software can be done in hardware.
Specifically, we could attach small custom coprocessors to everything, and Linux could delegate any sort of multitasking decision to them.
In practice, software allows us to customize, upgrade, and change these things without tightly coupling ourselves to a specific kernel and hardware design.
btilly 4 days ago
This doesn't mean that moving logic into hardware can't be a win. It often is. But we should also expect that what has tended to wind up in software will continue to do so in the future. And that includes complex decisions about the priority of interrupts.
sroussey 4 days ago
Wait, what? I’ve been out of compiler design for a couple decades, but that definitely used to be a thing.
RandomThoughts3 4 days ago
There are two different notions that are easy to confuse here: when a process can be preempted and when a process actually will be preempted.
Potential preemption points are a property of the scheduler, and they are what the global mode discussed here controls. More preemption points obviously mean more chances for a process to be preempted at an inconvenient time, but they also mean more chances to prioritise properly.
What you call the level of preemption, which is to say the priority given by the scheduler, absolutely is a property of the process and can definitely be set. The default Linux scheduler will indeed do its best to allocate more time slices to, and preempt less often, processes which have priority.
jabl 4 days ago
> SCHED_IDLE, SCHED_BATCH and SCHED_NORMAL/OTHER get the lazy thing, FIFO, RR and DEADLINE get the traditional Full behaviour.
weinzierl 4 days ago
Is this about kernel tasks, user tasks or both?
biorach 4 days ago
> There is also, of course, the need for extensive performance testing; Mike Galbraith has made an early start on that work, showing that throughput with lazy preemption falls just short of that with PREEMPT_VOLUNTARY.