Older blog entries for dreier (starting at number 50)

Aloha Means Goodbye

Today is my last day at Cisco.

Topspin logo

A little more than 10 years ago, in January 2001, I joined a small startup called Topspin Communications.  We weren’t saying much publicly about what we were doing, but the idea when I joined was to build a super-high-performance box for dynamic web serving, with web app blades, TCP offload blades, storage blades and SSL blades.  I was in charge of the SSL blade.  However, early 2001 was when it became clear that the bubble was well and truly bursting, and that we weren’t going to have enough customers if we actually built our box, so we abandoned that product.  Shortly after this decision, I got a call from the salesman from the company whose encryption chip we had selected for the SSL blade, telling me that they had decided not to build the encryption chip after all.  I remember thinking how upset I would have been if he had called a week earlier, when we were still planning a product around the chip.

After a few months of flailing around searching for a product direction (not the most fun time in Topspin’s history), we decided to focus on InfiniBand networking gear.  Initially, we focused on connections from servers on an InfiniBand fabric to existing Ethernet and Fibre Channel networks, and thus was born the IGR — InfiniBand Gateway Router — aka Buzz (“To InfiniBand and Beyond”):

Buzz

This first chassis was pretty far from being a shippable product: it had only 1X IB ports (2 Gbps!) and was built using Mellanox MT21108 “Gamla” chips.  Heroic hardware reworks and software hacks were done just to get the system booting; for example, somehow I added enough IB support to PPCBoot for the line cards to load a kernel from the controller over InfiniBand directed route MADs.

Still, it was enough to get companies like Dell and Microsoft to take us seriously (which helped us raise another $30 million in the summer of 2002).  Keep in mind that this was during the time that everyone thought InfiniBand was going to be huge, and Microsoft was planning on having IB drivers in Windows Server 2003.  In fact, we lugged some prototypes and emulators built on PCs up to Washington State to do interoperability testing and debugging with the Windows driver developers, and even watched Windows kernel developers at work.

When we were designing the next version of this box, one big decision was what 4X IB adapter chip to use inside.  The choices were to play it safe with IBM Microelectronics, or to gamble on a startup, Mellanox, who was making bold performance promises.  Luckily, we chose Mellanox, since the “safe” choice, IBM, canceled their IB products after struggling to make them work at all.  Mellanox’s first spin of their chip worked — it was an amazing experience to have a real 4X adapter that “just worked” in our lab after all the screwing around with half-baked 1X products that we had gone through (although we did spend plenty of time debugging the driver and firmware for that Tavor adapter).

We worked hard on getting to a real product, and in November 2002, we were able to introduce the Topspin 360, which had 24 4X IB ports, 12 standard IB module slots (each could hold either a 4-port 1G Ethernet gateway or a 2-port Fibre Channel gateway) as well as one very cool bezel design:

Topspin 360

In engineering, we followed the 360 with the “90 in 90” challenge and built the Topspin 90 in only 90 days.  I was able to get IPoIB working on the 90’s controller, in spite of having only a primitive IB switch and no host adapter available.  The Topspin 90 was introduced in January 2003:

Topspin 90

The engineering team spent the rest of 2003 building the Topspin 120 24-port switch (another switch chip to get IPoIB working on), and a new 6-port Ethernet gateway.  The Ethernet gateway was pretty cool — for the first 4-port Ethernet gateway, we used a PowerPC 440GP along with a Mellanox HCA and some Intel NICs and did all the forwarding between Ethernet and IPoIB in software.  Between PCI-X and CPU bottlenecks, we were a bit performance limited.  The 6-port gateway used a Xilinx Virtex 2 FPGA with our own InfiniBand logic, and did all the forwarding in hardware, so we were able to handle full line rate of minimum-sized packets in both directions on all 6 Ethernet ports–and in 2003, 12 Gbps of traffic was an awful lot!

Somewhere along the way, it became clear that operating systems (aside from borderline-irrelevant proprietary Unixes like Solaris and HP-UX) would not include InfiniBand drivers out of the box; Microsoft dropped their plans for IB drivers, and the open source Linux project stalled.  It became clear that if we wanted anyone to buy InfiniBand networking gear, we would have to take care of the server side of things too, and so we started working on a host driver stack.  Luckily, at the very beginning of our InfiniBand development in 2001, we made the decision to use Linux on PowerPC rather than VxWorks as our embedded OS.  That meant we had a lot of Linux InfiniBand driver code from our switch systems that we could adapt into host drivers.

At first, we distributed our drivers as proprietary binary blobs, which meant a lot of pain for us building our drivers for every different kernel flavor on every distribution our customers used, and which also meant a lot of pain for our customers who wanted to mix and match IB gear from different vendors.  Clearly, for IB to work everyone had to agree on an open source stack, and after a lot of arguing and political wrangling that I’ll skip over here, the OpenIB Alliance was formed, and we started working on InfiniBand drivers for upstream inclusion in Linux.

OpenIB Alliance

The starting point of all the different vendor stacks that got released as open source was not particularly good, and although a lot of the community was in denial about it, it was clear to me that we would have to start from scratch to get something clean enough to go upstream.  Around February 2004, I was trying to optimize IPoIB performance, and I got so frustrated trying to wade through all the abstraction layers of the Mellanox HCA driver that I decided I would try to write my own drastically simpler driver, and I started working on something I called “mthca”.

By May 2004, I had mthca working enough to run IPoIB and I decided to announce it publicly.  This led to another series of flamewars but also enough encouragement from people I considered sane that I continued working on a stack built around mthca, and by December 2004 we had something good enough to go upstream.  That was really the start of a lot of great things, and I’m really proud of my role helping to maintain the Linux stack; today we have iWARP support, eight different hardware drivers, IPoIB, storage protocols, network file protocols, RDS and more, and InfiniBand is used in more than half of the Top 500 supercomputers.  And I don’t think any of that happens without IB support being upstream.

On the hardware side of things, we continued building things like the Topspin 270 96-port switch (1.5 Tbps of switch capacity!), switches for IBM BladeCenter, and so on.  In April 2005, Cisco bought Topspin, and when the deal closed in May 2005, I officially became a Cisco employee.  The Topspin IB products became the Cisco SFS product line, and for a brief glorious time, Cisco sold IB gear.

Unfortunately (for the SFS product line, at least), the IB market didn’t grow fast enough to become the billion-dollar market that Cisco looks for, and so Cisco decided to stop selling IB gear.  We went from announcing new products to announcing that we wouldn’t sell those products (and I don’t think an SFS 3504 ever actually shipped to a customer).  In fact, I personally gummed up the works a bit by putting in an internal order for an SFS 3504 as soon as it was orderable; a year later, the guy responsible for winding down the SFS product line had to track me down and have me cancel the order, which was the last one still on the books.

After we stopped working on InfiniBand stuff, we were bounced around between a few Cisco business units until we ended up working on x86 servers for the Cisco UCS product line.  For the past few years, I’ve been helping Cisco build rack servers while continuing to be the InfiniBand/RDMA maintainer for Linux.  I’ve helped build cool products such as the Cisco C460 server (some amusing things about the C460 project were debugging UEFI/BIOS issues that made memtest86+ insta-reboot at a certain memory location, and figuring out why Linux wouldn’t boot on an x86 system with 1TB of RAM).  Cisco is a fun, rewarding place to work, and it’s amazing to still work every day with so many people from the old-school Topspin team, who have taught me so much over the years and become good friends along the way.

But since the Cisco acquisition, I’ve always missed the rush of working at a startup (hence my cri de coeur defending startups), and starting on Monday I’ll finally get back to that.  My new company is using InfiniBand, and continuing to maintain the upstream stack is part of my official job description, so nothing should be changing about my free software activities.  If my next job is half as good as Topspin, it should be an awesome ride.

My new company is still trying to keep things on the down-low, so I’m not going to put a link on my blog.  I can say that we still want to hire more great Linux developers, so if you’re interested, please get in touch with me!  We’re looking for people to work in-person in downtown Mountain View, CA (really downtown–not off in the Shoreline wilderness near the Googleplex, but actually in the same building as the Mozilla Foundation, near the train station, restaurants, etc).  As I said, working remotely isn’t an option, but if you aren’t currently in the area and want to move to Silicon Valley, we can help with relocation and visas (if you’re good enough, of course ;) ).

Syndicated 2011-01-21 18:00:00 from Roland's Blog

Missing the point on startups

I’ve been thinking about Ted Ts’o’s recent posts about whether it’s possible to do engineering or work on technology at startups. I’m not going to argue that you can’t work on technology at Google or another big company (although articles like these do point out the difficulties). It would be easy to pick on Google’s failures and point out how many of their successes actually came from buying startups, but what I really want to talk about is how (IMHO) Ted is misunderstanding startups.

Ted’s central point seems to be:

But if your primary interest is doing great engineering work, then you want to go to a company that has a proven business model.

Phrased so broadly, that’s bad advice. The reasoning that leads Ted to that bad advice starts with two contradictory misunderstandings of startups:

These days, the founder or founders will have a core idea, which they will hopefully patent, to prevent competitors from replicating their work, just as before. [...] most of the technology developed in a typical startup will tend to be focused on supporting the core idea that was developed by the founder.

and

Because if you talk to any venture capitalist, a startup has one and only one reason to exist: to prove that it has a scalable, viable business model.

In my experience, startups typically start with the founders deciding they’ve found a problem they can solve better, cheaper or faster — but it’s rare for founders to have an idea that’s developed enough to patent the whole thing. I think Ted implies that at a startup, the founders have figured everything out and everyone else is just filling in the details of the idea. To me, that seems completely backwards: if you go to a big company with an established business model, then almost certainly you’ll be working within the outline of that model (Innovator’s Dilemma and all that); at a startup, you’ll have to help the founders figure out just what the hell your company is supposed to be doing. And that gets to the second quote: a startup is an exercise in adapting the technology you’re building until you find the right business model. In other words, nearly every startup will get it wrong to start with and have to change plans repeatedly; the hope is that the technology you build along the way is valuable enough that you can survive until you find the right way to make money.

To give one example from personal experience, when I was at Topspin working on InfiniBand products, early in the InfiniBand hype cycle (around 2001 or so), we thought that every OS would soon ship with InfiniBand drivers, so we focused on building switches and other networking gear, without worrying about the hosts that would be connected to the network. It turned out that the first open source project for a Linux InfiniBand stack fizzled, and Windows also gave up on InfiniBand, so we ended up having to build an InfiniBand host stack — fortunately the embedded software from our switches already had most of the ingredients, so we were able to pull it off by reusing that work. (That Topspin host stack ended up getting released as free software, and it became one of the ingredients that went into the current Linux InfiniBand stack — and I ended up as the InfiniBand maintainer for the Linux kernel, while working for a startup.)

So as I said before, I think it’s bad advice to suggest to someone that “real” engineering can only be done at a large company. Certainly there are huge differences between working at a big company and a small company, and I do believe that there are “big company people” and “small company people.” If your goal is to spend nearly all your time making incremental improvements in ext4, sure, it’s probably easier to do that at a company that is a big enough ext4 user for that work to pay off; on the other hand if you’d rather work on something that you’re making up as you go along and where your decisions shape the whole future of the company, then a startup is probably a better place for you. Similarly, Ted’s assertion

For most startups, though, open source software is something that they will use, but not necessarily develop except in fairly small ways.

misses the real distinction. There are plenty of startups where open source is the main focus (Cloudera, Riptano and Strobe are just a few that spring to mind; and I don’t mean to dis all of the others that I’m not namechecking here), and there are gazillions of big technology companies that are actively hostile to open source. So really, if you want to get paid to work on open source, make sure you go to an open source company; the size of the company is a completely orthogonal issue.

To summarize my advice: if you think you might be a small company person, don’t let Ted scare you away from startups. Oh, and happy holidays!

Syndicated 2010-12-24 04:18:26 from Roland's Blog

Transition to Linode complete

I recently moved the VPS that hosts this blog from Slicehost to Linode.  Both are very nice hosting providers that give you full control over a Xen virtual machine, including root access to the distribution of your choice and a slick web control panel, but right now at least, Linode gives you roughly twice the RAM as well as substantially more storage and bandwidth for the same price as Slicehost.

The main point of this post is really just to include my Linode referral link — if you’re going to sign up for Linode anyway, why not use my link and save me a few bucks on hosting?

Syndicated 2010-12-08 05:36:52 from Roland's Blog

Two notes on IBoE

I want to mention two things about IBoE.  (For reasons already discussed, I’m using the term InfiniBand-over-Ethernet, or IBoE for short, for what the IBTA calls RoCE.)

First, we merged IBoE support on mlx4 devices into the upstream kernel in 2.6.37-rc1, so IBoE will be in the upstream kernel for the 2.6.37 release — one fewer reason to use OFED.  (And by the way, we used the term IBoE in the kernel.)  The requisite libibverbs and libmlx4 patches are not merged yet, but I hope to get to that soon and release new versions of the userspace libraries with IBoE support.

Second, a while ago I promised to detail some of my specific critiques of the IBoE spec (more formally, “Annex A16: RDMA over Converged Ethernet (RoCE)” to the “InfiniBand Architecture Specification Volume 1 Release 1.2.1”; if you want to follow along at home, you can download a copy from the IBTA).  So here are two places where I think it’s really obvious that the spec is a half-assed rush job, to the detriment of trying to create interoperable implementations.  (Fortunately everyone will just copy what the Linux stack does, if they don’t actually just reuse the code; but still, it would have been nice if the people writing the standards had thought things through instead of letting us just make something up and hope there are no corner cases that will bite us later.)

  • The annex has this to say about address resolution in A16.5.1, “ADDRESS ASSIGNMENT AND RESOLUTION”:

    The means for resolving a GID to a local port address (i.e. SMAC or DMAC) are outside the scope of this annex. It is assumed that standard Ethernet mechanisms, such as ARP or Neighbor Discovery are used to maintain an appropriate address cache for RoCE ports.

    It’s easy to say that something is “outside the scope” but, uh, who else is going to specify how to turn an IB GID into an Ethernet address, if not the spec about how to run IB over Ethernet packets?  And how could ARP conceivably be used, given that GIDs are 128-bit IPv6 addresses?  If we’re supposed to use neighbor discovery, a little more guidance about how to coordinate the IPv6 stack and the IB stack might be helpful.  In the current Linux code, we finesse all this by assuming that (unicast) GIDs are always local-scope IPv6 addresses with the Ethernet address encoded in them, so converting a GID to a MAC is trivial (cf rdma_get_ll_mac()).

  • This leads to the second glaring omission from the spec: nowhere are we told how to send multicast packets.  The spec explicitly says that multicast should work in IBoE, but it never says how to map a multicast GID to the Ethernet address to use when sending to that MGID.  In Linux we just used the standard mapping from multicast IPv6 addresses to multicast Ethernet addresses, but this is a completely arbitrary choice not supported by the spec at all.  (Both the unicast and multicast mappings are sketched just after this list.)
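
For what it’s worth, here is roughly what those two mappings look like in practice.  This is a standalone sketch of the conventions described above, with a function name of my own choosing; it is not the kernel code itself (the real helpers live in the RDMA core headers).

    #include <stdint.h>
    #include <string.h>

    /*
     * Sketch of the GID-to-MAC conventions the Linux IBoE code relies on:
     * a unicast GID is assumed to be a link-local address whose interface
     * ID is the EUI-64 form of the port MAC, so we just pull the MAC back
     * out; a multicast GID is mapped the same way IPv6 multicast addresses
     * map onto 33:33:xx:xx:xx:xx Ethernet addresses.
     */
    static void iboe_gid_to_mac(const uint8_t gid[16], uint8_t mac[6])
    {
        if (gid[0] == 0xff) {                 /* multicast GID */
            mac[0] = 0x33;
            mac[1] = 0x33;
            memcpy(mac + 2, gid + 12, 4);     /* low 32 bits of the GID */
        } else {                              /* unicast: EUI-64 interface ID */
            memcpy(mac, gid + 8, 3);
            memcpy(mac + 3, gid + 13, 3);     /* skip the ff:fe filler bytes */
            mac[0] ^= 0x02;                   /* undo the universal/local bit flip */
        }
    }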

You may hear people defending these omissions from the IBoE spec by saying that these things should be specified elsewhere or are out of scope for the IBTA.  This is nonsense: who else is going to specify these things?  In my opinion, what happened is simply that (for non-technical reasons) some members of the IBTA wanted to get a spec out very quickly, and this led to a process that was too short to produce a complete spec.

Syndicated 2010-12-07 06:33:48 from Roland's Blog

Was it something I said?

I saw that OpenBSD 4.7 was released a couple of weeks ago.  I tried to help, I really did.

I used to have a fanless 600MHz VIA system with a cheapie Airlink 101 Wi-Fi card that I used as a home wireless router.  I ran OpenBSD on it for a few reasons — at the time I started, the OpenBSD wireless stack was ahead of Linux; their security obsession appealed to me; and not using Linux everywhere seemed like a fun thing to do.  It all worked pretty well, except that the wireless interface sometimes got stuck while forwarding heavy traffic.  For quite a while, I survived with hacks similar to this nutty crontab entry.

Eventually, though, I said to myself, “Self, you’re a kernel hacker.  You should be able to fix this driver.”  And indeed, after a couple of evenings of hacking, I figured out what was wrong and came up with a patch that improved things immensely for me.  The problem was that the driver was not written with a system as slow as mine in mind, and it got confused if more than one interrupt happened before it got a chance to service the first interrupt — you can read the patch description for full details.  Of course, being a good free software citizen, I sent my patch to the OpenBSD mailing lists so that it could be applied upstream.
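
To give a feel for the general class of bug (this is not the actual OpenBSD driver code or my actual fix; the event names and the toy “device” below are made up), the broken pattern and the obvious repair look roughly like this:

    #include <stdint.h>
    #include <stdio.h>

    #define INTR_RX_DONE 0x1
    #define INTR_TX_DONE 0x2

    /* Toy "device": two events have piled up before the handler runs,
     * which is exactly what happens on a slow CPU under heavy traffic. */
    static uint32_t pending = INTR_RX_DONE | INTR_TX_DONE;

    static uint32_t read_intr_status(void)    /* read-and-clear status register */
    {
        uint32_t status = pending;
        pending = 0;
        return status;
    }

    static void buggy_intr(void)
    {
        uint32_t status = read_intr_status();

        if (status & INTR_RX_DONE)
            printf("handled RX\n");
        else if (status & INTR_TX_DONE)       /* bug: the "else" drops the TX event */
            printf("handled TX\n");
    }

    static void fixed_intr(void)
    {
        uint32_t status;

        /* Keep reading until nothing is pending, and handle every bit we see. */
        while ((status = read_intr_status()) != 0) {
            if (status & INTR_RX_DONE)
                printf("handled RX\n");
            if (status & INTR_TX_DONE)
                printf("handled TX\n");
        }
    }

    int main(void)
    {
        buggy_intr();                         /* prints only "handled RX" */
        pending = INTR_RX_DONE | INTR_TX_DONE;
        fixed_intr();                         /* prints both */
        return 0;
    }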

Here’s where things went wrong.  I never heard from the author of this driver — I got no reply when I reported the original bug, and no replies to any mail I sent about my patch.  I did get several reports from other users who had the same problem and found that my patch fixed things for them as well, and finally another OpenBSD committer wrote, “Then if no one objects I’ll commit it tomorrow.”  Unfortunately, at this point the original driver author did seem to get interested — he sent private email to this committer (not copying the mailing list or me) objecting, and so we ended up with, “Objections were made. Apparently this patch only works for AP and does funky stuff to the hardware. So back to the drawing board on this one.”  As I said, all of my attempts to work directly with the driver author to find out what those objections were or how to improve the patch were ignored.

At this point I gave up on getting my patch upstream (and when I upgraded my wireless network to 802.11n, I chose a MIPS box running OpenWrt).

Syndicated 2010-06-03 21:42:27 from Roland's Blog

Rocky roads

I saw that the InfiniBand Trade Association announced the “RDMA over Converged Ethernet (RoCE)” specification today.  I’ve already discussed my thoughts on the underlying technology (although I have a bit more to say), so for now I just want to say that I really, truly hate the name they chose.  There are at least two things that suck about the name:

  1. Calling the technology “RDMA over” instead of “InfiniBand over” is overly vague and intentionally deceptive.  We already have “RDMA over Ethernet” — except we’ve been calling it iWARP.  Choosing “RoCE” is somewhat like talking about “Storage over Ethernet” instead of “Fibre Channel over Ethernet.”  Sure, FCoE is storage over Ethernet, but so is iSCSI.  As for the intentionally deceptive part: I’ve been told that “InfiniBand” was left out of the name because the InfiniBand Trade Association felt that InfiniBand is viewed negatively in some of the markets they’re going after.  What does that say about your marketing when you are running away from your own main trademark?
  2. The term “Converged Ethernet” is also pretty meaningless.  The actual technology has nothing to do with “converged” Ethernet (whatever that is, exactly); the annex that was just released simply describes how to stick InfiniBand packets inside a MAC header and Ethernet FCS (see the sketch just after this list), so plain “Ethernet” would be more accurate.  At least the “CE” part is an improvement over the previous try, “Converged Enhanced Ethernet” or “CEE”; not only does the technology have nothing to do with CEE either, but “CEE” was an IBM-specific marketing term for what eventually became Data Center Bridging or “DCB.”  (At Cisco we used to use the term “Data Center Ethernet” or “DCE.”)
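
For the curious, my reading of the frame layout the annex describes is roughly the following.  This is a hedged sketch, not a normative definition: the struct and field names are mine, and the variable-length pieces (extended transport headers, payload, ICRC, FCS) are only noted in a comment.

    #include <stdint.h>

    /*
     * Rough on-the-wire layout of an IBoE (RoCE) packet as I read the annex:
     * a normal Ethernet header using the ethertype the IBTA registered for
     * RoCE, then the IB GRH and BTH where the IB link-layer header used to
     * be, then the usual IB payload, ICRC and finally the Ethernet FCS.
     */
    #define ETH_P_IBOE 0x8915                 /* RoCE ethertype */

    struct iboe_frame_start {
        uint8_t  dst_mac[6];
        uint8_t  src_mac[6];
        uint16_t ethertype;                   /* ETH_P_IBOE, big-endian on the wire */
        uint8_t  grh[40];                     /* IB Global Route Header: GIDs live here */
        uint8_t  bth[12];                     /* IB Base Transport Header */
        /* extended transport headers, payload, ICRC (4 bytes) and the
         * Ethernet FCS follow */
    } __attribute__((packed));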

So both the “R” and the “CE” of “RoCE” aren’t very good choices.  It would be a lot clearer and more intellectually honest if we could just call InfiniBand over Ethernet by its proper name: IBoE.  And explaining the technology would be a bit simpler too, since the analogy with FCoE becomes a lot more explicit.

Syndicated 2010-04-19 23:16:42 from Roland's Blog

First they laugh at you…

I found this article in “Network Computing” pretty interesting, although not exactly for the content.  Just the framing of the whole article, with Microsoft touting performance parity with Linux on some HPC benchmarks as an achievement (while putting up a graph that shows they are still at least a few percent behind), shows how dominant Linux is in HPC.  Also, the article says:

The beta also reportedly includes optimizations for new processors and can deploy and manage up to 1,000 nodes.

So, in other words, Microsoft is stuck at the low end of the HPC market, with a product only usable on small clusters.

Syndicated 2009-11-20 20:44:27 from Roland's Blog

Poor choice of words

I just got a marketing email from drugstore.com with the subject “Diaper blowout: save on Pampers, Huggies, Seventh Generation and more.”  If you’re not a parent and you don’t know why that’s unintentionally hilarious, just do a web search for “diaper blowout.”  Suffice it to say that when placed next to the word “diaper,” the word “blowout” does not usually connote slashed prices.

Syndicated 2009-08-12 00:22:21 from Roland's Blog

Lazyweb: best Verizon data card?

I currently have Verizon mobile data service with a Kyocera PC card, and it works well with recent distros using NetworkManager.  However, my venerable laptop is being replaced with a Lenovo X200, which has no PC card slot, so I’ll have to replace my Verizon data card as well.  According to the Verizon Wireless web site, my choices seem to be the NovaTel V740 for ExpressCard, or for USB the UTStarcom UM175 or the Novatel USB760.

My question for the lazyweb is: which data card/EV-DO modem should I get (assume that I’ll be running Linux 99.9% of the time when I use it)?  The ExpressCard is substantially more expensive and less flexible (since I may want to use this card on a system without an ExpressCard slot someday), so I’d probably go with one of the USB cards if it’s left up to me.  The USB760 doubles as a micro SD reader, which is not useful to me and adds a mass storage interface that probably just causes confusion, so my first choice would probably be the UM175.  However, if someone with first-hand knowledge knows why that’s a bad decision, I’d love to hear about it in the comments.

(And I put a very high value on not having to boot into Windows periodically to update cell tower locations or anything like that, for what it’s worth.)

Syndicated 2009-06-10 00:15:30 from Roland's Blog

RDMA on Converged Ethernet

I recently read Andy Grover’s post about converged fabrics, and since I participated in the OpenFabrics panel in Sonoma that he alluded to, I thought it might be worth sharing my (somewhat different) thoughts.

The question that Andy is dealing with is how to run RDMA on “Converged Ethernet.”  I’ve already explained what RDMA is, so I won’t go into that here, but it’s probably worth talking about Ethernet, since I think the latest developments are not that familiar to many people.  The IEEE has been developing a few standards collectively referred to as “Data Center Bridging” (DCB), also sometimes called “Converged Enhanced Ethernet” (CEE).  This refers to high-speed Ethernet (currently 10 Gb/sec, with a clear path to 40 Gb/sec and 100 Gb/sec), plus new features.  The main new features are:

  • Priority-Based Flow Control (802.1Qbb), sometimes called “per-priority pause”
  • Enhanced Transmission Selection (802.1Qaz)
  • Congestion Notification (802.1Qau)

The first two features let an Ethernet link be split into multiple “virtual links” that operate pretty independently — bandwidth can be reserved for a given virtual link so that it can’t be starved, and per-virtual-link flow control makes sure certain traffic classes don’t overrun their buffers, so packets don’t get dropped.  Then congestion notification means that we can tell senders to slow down, to avoid the congestion spreading that such flow control can cause.

The main use case that DCB was developed for was Fibre Channel over Ethernet (FCoE).  FC requires a very reliable network — it simply doesn’t work if packets are dropped because of congestion — and so DCB provides the ability to segregate FCoE traffic onto a “no drop” virtual link.  However, I think Andy misjudges the real motivation for FCoE; the TCP/IP overhead of iSCSI was not really an issue (and indeed there are many people running iSCSI with very high performance on 10 Gb/sec Ethernet).

The real motivation for FCoE is to give a way for users to continue using all the FC storage they already have, while not requiring every server that wants to talk to the storage to have both a NIC and an FC HBA.  With a gateway that’s easy to build and scale, legacy FC storage can be connected to an FCoE fabric, and now servers with a “converged network adapter” that functions as both an Ethernet NIC and an FCoE HBA can talk to network and storage over one (Ethernet) wire.

Now, of course, for servers that want to do RDMA, it makes sense that they want a triple-threat converged adapter that does Ethernet NIC, FCoE HBA, and RDMA.  The way that people are running RDMA over Ethernet today is via iWARP, which runs an RDMA protocol layered on top of TCP.  The idea that Andy and several other people in Sonoma are pushing is to do something analogous to FCoE instead, that is, to take the InfiniBand transport layer and stick it into Ethernet somehow.  I see a number of problems with this idea.

First, one of the big reasons given for wanting to use InfiniBand on Ethernet instead of iWARP is that it’s the fastest path forward.  The argument is, “we just scribble down a spec, and everyone can ship it easily.”  That ignores the fact that iWARP adapters are already shipping from multiple vendors (although, to be fair, none with support for the proposed IEEE DCB standards yet; but DCB support should soon be ubiquitous in all 10 gigE NICs, iWARP and non-iWARP alike).  And the idea that an IBoE spec is going to be quick or easy to write flies in the face of the experience with FCoE; FCoE sounded dead simple in theory (just stick an Ethernet header on FC frames, what more could there be?), but it turns out that the standards work has taken at least 3 years, and a final spec is still not done.  I believe that IBoE would be more complicated to specify, and fewer resources are available for the job, so a realistic view is that a true standard is very far away.

Andy points at a TOE page to say why running TCP on an iWARP NIC sucks.  But when I look at that page, pretty much all the issues are still there with running the IB transport on a NIC.  Just to take the first few on that page (without quibbling about the fact that many of the issues are just wrong even about TCP offload):

  • Security updates: yup, still there for IB
  • Point-in-time solution: yup, same for IB
  • Different network behavior: a hundred times worse if you’re running IB instead of TCP
  • Performance: yup
  • Hardware-specific limits: yup

And so on…

Certainly, given infinite resources, one could design an RDMA protocol that was cleaner than iWARP and took advantage of all the spiffy DCB features.  But worse is better, and iWARP mostly works well right now; fixing the worst warts of iWARP has a much better chance of success than trying to shoehorn IB onto Ethernet and ending up with a whole bunch of unforeseen problems to solve.

Syndicated 2009-03-26 04:11:08 from Roland's Blog

