broonie is currently certified at Journeyer level.

Name: Mark Brown
Member since: 2000-04-01 18:19:43
Last Login: 2008-02-16 16:48:52


Notes:

If you know me for doing anything, it's probably maintaining a few Debian packages (Leafnode, zlib, helping with nis and a bunch of Fortran-related packages). I currently work for Wolfson Microelectronics on drivers for their chips.

Projects

Recent blog entries by broonie


We show up

It’s really common for pitches to management within companies about Linux kernel upstreaming to focus on the cost savings to vendors from getting their code into the kernel, especially in the embedded space. These benefits are definitely real, especially for vendors trying to address the general market or extend the lifetime of their devices, but they are only part of the story. The other big thing that happens as a result of engaging upstream is that it is a big part of how other upstream developers become aware of what sorts of hardware and use cases are out there.

From this point of view it’s often the things that are most difficult to get upstream that are the most valuable to talk to upstream about. Of course it’s not quite that simple: a track record of engagement on the simpler drivers, and the knowledge and relationships built up in that process, make having discussions about harder issues a lot easier. There are engineering and cost benefits that come directly from having code upstream, but it’s not just that; the more straightforward upstreaming is also an investment in making it easier to work with the community to solve the more difficult problems.

Fundamentally Linux is made by and for the people and companies who show up and participate in the upstream community. The more ways people and companies do that the better Linux is likely to meet their needs.

Syndicated 2016-06-10 22:59:36 from Technicalities

OpenTAC sprint

This weekend Toby Churchill kindly hosted a hacking weekend for OpenTAC – myself, Michael Grzeschik, Steve McIntyre and Andy Simpkins got together to bring up the remaining bits of the hardware on the current board revision and get some of the low level tooling, like production flashing for the FTDI serial ports on the board, up and running. It was a very productive weekend: we verified that everything was working, with only a few small mods needed for the board. Personally the main thing I worked on was getting most of an initial driver for the EMC1701 written. That was the one component without Linux support, and having it allowed us to verify that the power switching and measurement for the systems under test was working well.
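As a very rough illustration of the shape such a driver takes (this is just a sketch, not the actual EMC1701 code – the register address, scaling and all of the "example" naming below are hypothetical placeholders), a minimal I2C hwmon-style skeleton looks something like:

/*
 * Minimal sketch of an I2C hwmon driver skeleton; the register address
 * and scaling are hypothetical placeholders, not the EMC1701 register map.
 */
#include <linux/module.h>
#include <linux/i2c.h>
#include <linux/hwmon.h>
#include <linux/hwmon-sysfs.h>

#define EXAMPLE_REG_CURRENT    0x00    /* hypothetical current reading register */

static ssize_t show_current(struct device *dev,
                            struct device_attribute *attr, char *buf)
{
        struct i2c_client *client = dev_get_drvdata(dev);
        int val = i2c_smbus_read_word_swapped(client, EXAMPLE_REG_CURRENT);

        if (val < 0)
                return val;

        /* Real hardware needs the datasheet scaling applied here */
        return sprintf(buf, "%d\n", val);
}
static SENSOR_DEVICE_ATTR(curr1_input, S_IRUGO, show_current, NULL, 0);

static struct attribute *example_attrs[] = {
        &sensor_dev_attr_curr1_input.dev_attr.attr,
        NULL
};
ATTRIBUTE_GROUPS(example);

static int example_probe(struct i2c_client *client,
                         const struct i2c_device_id *id)
{
        struct device *hwmon;

        hwmon = devm_hwmon_device_register_with_groups(&client->dev,
                                                       "example_monitor",
                                                       client, example_groups);
        return PTR_ERR_OR_ZERO(hwmon);
}

static const struct i2c_device_id example_id[] = {
        { "example-monitor", 0 },
        { }
};
MODULE_DEVICE_TABLE(i2c, example_id);

static struct i2c_driver example_driver = {
        .driver = { .name = "example-monitor" },
        .probe = example_probe,
        .id_table = example_id,
};
module_i2c_driver(example_driver);

MODULE_LICENSE("GPL");

The real driver of course needs the actual register map and scaling from the datasheet, plus the additional voltage and temperature channels.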

There’s still at least one more board revision and quite a bit of software work to do (I’m hoping to get the EMC1701 upstream for v4.8), but it was great to finally see all the physical components of the system working well and see it managing a system under test. This board revision should support all the software development that’s going to be needed for the final board.

Thanks to all who attended, Pengutronix for sponsoring Michael’s attendance and Toby Churchill for hosting!


Syndicated 2016-05-16 13:20:01 from Technicalities

Expedient ABIs

The biggest change we’ve seen in the Linux kernel for ARM over the past few years has been the transition to providing descriptions of the hardware in systems via device tree. This splits the description of the devices in the system that can’t be automatically enumerated out of the kernel and into a separate binary, instead of it being part of the kernel image. Currently for most systems that are actively used upstream the device tree source code is kept in the kernel, but the goal is to allow people to use device trees that are distributed separately from the kernel, especially device trees that are shipped as part of the board firmware. This is something that other platforms have done for a long time: PowerPC Macs and Sun SPARC systems use device trees as the mechanism for describing the hardware to the operating system.

One consequence of this desire to allow the kernel and device tree to be shipped separately is that the device tree becomes an ABI. This is a really big change for people working in the embedded and consumer electronics areas where ARM has been most widely deployed: it means that any descriptions of the hardware need to be something that can stand the test of time, and anything we release is something we have to expect to carry code for indefinitely. When everything was done as part of the kernel binary we could easily do something that doesn’t quite represent the hardware with the intention of replacing it later; now it is much harder to do that.
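To make that a little more concrete, here is a sketch of the kernel side of a typical binding (the compatible string and property name are made up for illustration, not a real binding). Once something like this ships in a release, the compatible string and the property names and semantics are exactly the things that have to keep working with old device trees:

/*
 * Sketch of the kernel side of a device tree binding; the compatible
 * string and property name are hypothetical examples.
 */
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
        u32 threshold;

        /* The property name is part of the ABI: old device trees must keep working */
        if (of_property_read_u32(pdev->dev.of_node, "example,threshold",
                                 &threshold))
                threshold = 0;  /* fall back to a default if the property is absent */

        dev_info(&pdev->dev, "threshold %u\n", threshold);
        return 0;
}

static const struct of_device_id example_of_match[] = {
        /* Compatible strings are matched against the device tree */
        { .compatible = "vendor,example-device" },
        { }
};
MODULE_DEVICE_TABLE(of, example_of_match);

static struct platform_driver example_driver = {
        .probe = example_probe,
        .driver = {
                .name = "example-device",
                .of_match_table = example_of_match,
        },
};
module_platform_driver(example_driver);

MODULE_LICENSE("GPL");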

A hardware example of this constraint is the SAW in Qualcomm SoCs. This is a block in the SoC which provides control of some of the PMIC regulators used for the CPU cores in very low power states, and also allows the CPU to control those regulators with fast memory mapped registers rather than the slower buses used to talk to the PMIC. Unfortunately it doesn’t fully replace direct access to the PMIC: it supports a subset of the control we need for the PMIC but not all of it. We could represent the SAW as an independent regulator, but from a system integration point of view it is functioning as an extra control interface for the external PMIC, and if we want to use the extra functionality that is only available via direct access to the PMIC we need to take that into account and represent the SAW as an extension of it. Even if we don’t need that extra PMIC functionality at the current time, this means we need to do some extra work to describe the PMIC before we can use the SAW, even though we have no intention of using anything other than the SAW.

Now, few if any people are actually using the device tree as an ABI at present, so those working on enabling platforms often forget about the requirement and find it an obstacle to getting things done – they have pressure to get things done but don’t have quite the same pressure to make sure that attention is paid to device tree compatibility, so it can easily get forgotten. Over time this may change, especially if people start to take advantage of the device tree as an ABI and that becomes more and more important, but for now if we want to enable that in the future it’s something we have to actively think about and work on, accepting that this means we won’t always be able to do the most expedient thing.

Syndicated 2016-02-20 17:19:41 from Technicalities

Performance problems

Just over a year ago I implemented an optimization to the SPI core code in Linux that avoids some needless context switches to a worker thread in the main data path that most clients use. This was really nice: it was simple to do but saved a bunch of work for most drivers using SPI and made things noticeably faster. The code got merged in v4.0 and that was that; I kept kicking around a few more ideas for optimizations in this area, but nothing more happened until the past month.

What happened then was that for whatever reason people started picking up v4.0 and using it in production more. On some systems people started seeing problems when there was heavy SPI flash usage, often during things like distribution installation. In some cases the lockup detector fired, but the most entertaining error was that on Marvell Orion systems (which are single core) when the flash was being heavily used the SATA controller started having trouble handling interrupts. These problems all bisected down to the key commit in that series, 0461a4149836c79 (spi: Pump transfers inside calling context for spi_sync()).

The problem is that there are a number of widely deployed SPI controllers out there that don’t support DMA and instead require the CPU to explicitly read and write everything sent to and from the registers in the controller. To make matters worse these accesses to the controller will usually take many CPU cycles to complete, each one stalling the CPU while it happens. This is fine for short transfers or if the CPU has nothing else to do, but on a busy multitasking system it’s an issue. Before the optimization the switches between the worker thread interacting with the hardware and the thread initiating the SPI operations provided breaks in this activity which allowed other things to switch in. Unfortunately, once those are optimized away, if there’s a lot of work for the controller being done from one thread then that thread can run for a long time without pause. The fix for affected drivers, if there is no less CPU intensive way of driving the hardware, is to add some explicit sleeps into the driver itself, either at the end of transfer_one() or perhaps in an unprepare_message() function.
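As a rough sketch of what that looks like in a PIO-only driver (the FIFO accessors here are placeholders rather than any real controller's registers, and the delay values are arbitrary):

/*
 * Sketch of an explicit pause at the end of a PIO transfer; the
 * example_write_fifo()/example_read_fifo() helpers and the delay values
 * are placeholders, not taken from any real driver.
 */
#include <linux/delay.h>
#include <linux/spi/spi.h>

/* Stand-ins for the controller's register accessors */
static void example_write_fifo(u8 val) { }
static u8 example_read_fifo(void) { return 0; }

static int example_transfer_one(struct spi_master *master,
                                struct spi_device *spi,
                                struct spi_transfer *xfer)
{
        const u8 *tx = xfer->tx_buf;
        u8 *rx = xfer->rx_buf;
        unsigned int i;
        u8 val;

        for (i = 0; i < xfer->len; i++) {
                /* Each register access busy-waits, stalling the CPU */
                example_write_fifo(tx ? tx[i] : 0);
                val = example_read_fifo();
                if (rx)
                        rx[i] = val;
        }

        /*
         * With the context switch to the worker thread optimised away, a
         * long run of spi_sync() calls from one thread can otherwise hog
         * the CPU, so give the scheduler an explicit chance to run.
         */
        usleep_range(50, 100);

        return 0;
}

The same sort of pause could equally go in an unprepare_message() callback so it only happens once per message rather than once per transfer.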

In a way I was quite pleased to see this – it was a clear demonstration that the optimization was having the intended effect, though obviously users of affected systems will not find that so comforting. It’s not the first time that making things faster or fixing a bug has revealed an underlying problem, and I’m sure it won’t be the last.

Syndicated 2016-02-13 00:01:12 from Technicalities

Maintaining your email

One of the difficulties of being a kernel maintainer for a busy subsystem is that you will often end up getting a lot of mail that requires reading and handling which in turn requires sending a lot of mail out in reply. Some of that requires thought and careful consideration but a lot of it is quite routine and (perhaps surprisingly) there is often more challenge in doing a good job of handling these routine messages.

For a long time I used to hand write every reply I sent, but the problem with doing that is that sending the same message a lot of times tends to result in the messages getting more and more brief as the message becomes routine and practised. Your words become more optimised, and if you’ve stopped thinking about the message before you’ve finished typing it then there’s a desire to finish the typing and get on to the next thing. This is, I think, where a lot of the reputation that kernel maintainers have for being terse and unhelpful comes from – messages that are very practised for someone sending them all the time aren’t always going to be obvious or helpful for someone who’s not so intimately familiar with what’s going on. The good part of it is that everyone is getting a personalised response and it’s easy to insert a comment about the specific situation when you’re already replying, but it’s not clear that the tradeoff is a good one.

What I’ve started doing instead for most things is keeping a set of pre-written paragraphs for common cases that I can just insert into a mail and edit as needed. Hopefully it’s working well for people; it means the replies are that bit more verbose than they might otherwise be (mainly adding an explanation of why a given thing is being asked for) but can easily be adapted as needed. The one exception is the “Applied, thanks” mails I used to send when I apply a patch (literally just saying that). Those are now automatically generated by the script I use to sync my local git repository with kernel.org and are very much more verbose:

From: Mark Brown <broonie@kernel.org>
To: ${CCS}
Cc: ${LIST}
Subject: ${SUBJECT}
In-Reply-To: ${MSGID}

The patch

   ${TITLE}

has been applied to the ${REPO} tree at

   ${URL} ${BRANCH}

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.

(unfortunately this bit seems to be something that it’s worth pointing out)

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark

(the script does try to CC relevant lists). As well as giving people more information this also means that the mails only get sent out when things actually get published to my public repositories which avoids some confusion that used to happen sometimes with people getting my replies before I’d pushed, especially when I’d been working with poor connectivity as often happens when travelling. On the down side it’s very much an obvious form letter which some people don’t like and which can make people glaze over.

My hope with this is to make things easier on average for patch submitters and easier for me. Feedback on the scripted e-mails appears to be good thus far, and the goal with the pasted in content is that it should be less obvious that it’s happening, so I’d expect less feedback there.

Syndicated 2016-02-09 17:36:53 from Technicalities


broonie certified others as follows:

  • broonie certified joey as Master
  • broonie certified cas as Journeyer
  • broonie certified vicious as Journeyer
  • broonie certified bombadil as Journeyer
  • broonie certified lupus as Journeyer
  • broonie certified bribass as Journeyer
  • broonie certified vincent as Journeyer
  • broonie certified apenwarr as Journeyer
  • broonie certified wichert as Master
  • broonie certified espy as Journeyer
  • broonie certified doogie as Journeyer
  • broonie certified hands as Journeyer
  • broonie certified branden as Journeyer
  • broonie certified netgod as Journeyer
  • broonie certified knghtbrd as Journeyer
  • broonie certified Joy as Journeyer
  • broonie certified rse as Master
  • broonie certified exa as Apprentice
  • broonie certified Stevey as Journeyer
  • broonie certified moray as Journeyer
  • broonie certified skx as Journeyer

Others have certified broonie as follows:

  • joey certified broonie as Journeyer
  • lordsutch certified broonie as Journeyer
  • branden certified broonie as Journeyer
  • cech certified broonie as Journeyer
  • lazarus certified broonie as Journeyer
  • Joy certified broonie as Journeyer
  • fxn certified broonie as Journeyer
  • Jordi certified broonie as Journeyer


