h2. Fantastical 2 vs BusyCal
I recently began using a Mac full-time in a corporate environment again (yay Amazon!), which is to say that I'm using it to talk to a Microsoft Exchange server for both email and calendar events. Of course, I have a copy of Microsoft Outlook on my Mac, and that has some excellent features - for instance, it has a good understanding of rooms vs people as meeting attendees, and also has a great room-finder feature for when you're setting up new meetings. However, I found its interface for general-purpose email use _really_ slow. I was pretty surprised, as I've been using Outlook on Windows for the last 3 years, and it actually works relatively well (as long as you install something like "XKeymacs":http://www.cam.hi-ho.ne.jp/oishi/indexen.html to get working Emacs-like keybindings, like Ctrl-E to go to the end-of-line). Thus, I returned to Mail.app and Calendar.app (which used to be known as iCal). Mail.app works really well - better than I'd remembered, actually, and there are some really excellent plugins for it that add a lot of functionality to make it solidly a better email client than Outlook (there may be similar plugins for Outlook, I don't know). For reference, check out "Mail Act-On":http://smallcubed.com/mao/ and "MailTags":http://smallcubed.com/mt/, and anything else from "SmallCubed":http://smallcubed.com.
However, the Calendar app Apple provides is, well... I think "basic" is the kindest way of putting it. It can talk to an Exchange server and display things... mostly. It is unable to understand event categories, preferring to force users to move events between multiple calendars to get any sort of organization. They do it smoothly enough that it would almost be an acceptable solution EXCEPT that once you move an event out of the main Exchange calendar--even to another calendar on the same account on the same Exchange server--that event will no longer receive updates (e.g. if the organizer changes the room). That's clearly unacceptable, so the search was on for a replacement! (Many people have experienced other problems with Apple's Calendar app, such as not receiving updates or deleting entries unexpectedly - I didn't use it long enough to have such issues.)
If you look around, there are several calendar apps out there, but two stand out as the most comprehensive, full-featured programs for a Mac (as of October 2016):
# Flexibits "Fantastical 2":https://flexibits.com/fantastical
# BusyMac "BusyCal":https://www.busymac.com/busycal/
They both cost $50 for the full version (which might be steep, but keep in mind you're using this tool daily). To decide for yourself which you prefer, you can download both of them for free and use them unhindered for 30 days, which is exactly what I did. And, honestly, I think they're both pretty close. Both are MUCH better than iCal, and the level of quality and the feature-sets are fairly similar. I think I'd be happy with either one.
After spending 30 days with both (first one, then the other), I prefer Fantastical by a hair. It's a bit prettier, and it has a strict sense of what a week is, so when I have it display the "week" view (which is my standard), I get a sense of progression through the week. (It sounds minor, but for a tool you use all the time, minor things matter.) Flexibits makes a lot of hay out of their "natural language event adding"--they did it first (BusyCal has something similar now), and they do it best (BusyCal mis-parses things sometimes, whereas Fantastical's parser is nearly magical). Fantastical can also modify which calendars get displayed/enabled based on your location, which is a neat trick that probably most folks won't care about. In general, Fantastical seems to have slightly more engagement with the latest and greatest macOS features (for example, they pull your account information from the system database, whereas BusyCal requires you to enter that information again).
BusyCal is slightly more configurable (e.g. you can select between several appearance styles, beyond just changing the font), and it has a more flexible way of handling a large number of calendars (if you don't have, or don't like having, large numbers of calendars, this is just clutter rather than a benefit). BusyCal also has a dedicated panel for showing and editing the details of events, which is rather convenient; in Fantastical that requires a double-click (or hovering with a couple of keys pressed, in their latest update). It has some features that Fantastical doesn't that I consider mostly useless (like showing the weather for the next few days). However, it refuses to enforce a strict one-week view, instead simply displaying a rolling 5- or 7-day view. Some people would probably find that very helpful, because it provides a 5- or 7-day view into the future regardless of where you are in the week. I find it annoying.
Both display data from Gmail/Exchange/iCloud/etc. and seem to speak Exchange natively (whereas Apple's Calendar.app speaks Exchange with a thick accent and a limited vocabulary, if that makes sense), and both support event categories/tags (e.g. changing the color of an event without changing the calendar). Both also handle reminders/todos along with events, which is handy for those of us who are mostly satisfied with basic reminders--but neither's featureset on that front will wow anyone who is used to a dedicated GTD-type app. Both also have companion iOS apps--I haven't tried them, but they exist. Both also have highly detailed calendar menu widgets--which, if you don't want to have the calendar app window open all the time, is a *huge* improvement over Apple's Calendar. Fantastical actually started as a menu-only calendar, but grew into a full-featured app. BusyCal went the other direction, but both have ended up in a similar position.
They both also have similar drawbacks. Neither has the "Find a Room" feature that Outlook has, which is a real shame. And neither one has a really good "small month" view that can be displayed along with the reminders. Fantastical will display a very pretty and relatively useful small "month" view along with a list of upcoming events, but if you choose to display reminders, they replace the month view (I'd rather they replaced the list of upcoming events). BusyCal will display a tiny, nearly useless "month" view (equivalent to the @cal@ program on the terminal), but puts it under the (unnecessary, imho) list of the various calendars I'm subscribed to, so that showing both it and the Todo (Reminders) list scrunches the main display, thereby making events harder to read. That month view is so small that it could have gone elsewhere, taking real estate from something that can tolerate losing space better (like the reminders list). Also, both seem to have a slightly higher lag time (minutes) in displaying changes on the Exchange server than Outlook does - something I haven't yet figured out, given that they all use Exchange push notifications. Maybe the fact that Outlook updates the calendar on email pushes as well as calendar pushes explains the difference; I don't know.
Anyway, hopefully that review helps someone. I ended up purchasing Fantastical 2, and after four months, I haven't regretted that decision even once.
h2. Google, DKIM, and SpamAssassin
Google, once again, is doing something unfortunate with DKIM (see earlier posts on related subjects). This one is a little less their fault, just unfortunate.
Specifically, Google Groups scans messages for spam and adds a header to indicate which Group scanned them (perhaps they do this to avoid redundant spam scans?). This header is @X-Spam-Checked-In-Group@. Once the email passes through the group and is distributed outside of Google (e.g. to Yahoo email addresses), Google does the responsible thing and signs the email with a DKIM signature. This DKIM signature obeys all the rules, and includes the @X-Spam-Checked-In-Group@ header among the signed headers.
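For illustration, such a signature might look roughly like this (values abbreviated and hypothetical; note @x-spam-checked-in-group@ in the list of signed headers, @h=@):

bc. DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=googlegroups.com; s=20161025;
        h=from:to:subject:date:message-id:x-spam-checked-in-group;
        bh=Base64BodyHash=; b=Base64Signature=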
Now enter the recipient. If the recipient uses SpamAssassin to do their own spam filtering, something very unhelpful will happen. According to SpamAssassin's documentation:
bq. Note: before header modification and addition, all headers beginning with @X-Spam-@ are removed to prevent spammer mischief and also to avoid potential problems caused by prior invocations of SpamAssassin.
Thus, SpamAssassin removes the header that Google added, and in so doing, invalidates the DKIM signature.
This is not a problem as long as one of the following is true:
* DKIM validation is done *before* SpamAssassin filtering is done AND the email will not need to have that signature re-validated (e.g. it is not forwarded or retrieved by any other DKIM-aware system)
* SpamAssassin is not permitted to modify the content of the email (e.g. it is being used as a boolean test OR the headers it generates are being saved and applied to the email afterward)
However, there are lots of ways in which this may not be true. For example, some people forward their email on to other systems, or have their email fetched into other systems (e.g. via fetchmail or via Gmail's POP3 fetching service).
The choice of header name is the unfortunate thing. If SpamAssassin had chosen to use @X-SpamAssassin-@ or some other more specific header prefix, or if Google had chosen a Google-specific prefix such as @X-Gmail-Spam-Checked-In-Group@, this could all have been avoided. But... here we are.
h2. Noodler Black Inks
I have been learning about fountain pens for a little while now, ever since being turned on to them by my parents and friends. My first fountain pen was a bit of a disappointing disaster, and I nearly wrote off fountain pens entirely as a result (pun intended). I really like super-fine-tip pens for note-taking, which is most of what I do when I'm writing things out by hand. I like pens like the Pilot Precise V5 and similar needle-nose rollerball pens, because they're fine-tipped and smooth, reliable writers. However, those pens can have problems with line quality - sometimes they leak a little too much ink on the paper, or sometimes when you're writing quickly the line gets unusually thin or even skips, and so forth. What really impressed me about good fountain pens, when I finally found a fountain pen I love (a Pilot Namiki Vanishing Point with an extra-fine nib), is the line quality: very thin, and extremely consistent line width. And now that I've used it, I'm spoiled, and the tiniest inconsistencies that I get from other pens now annoy me. It's like if you've gotten used to a 60Hz refresh rate on your monitor; when you go back to a 30Hz refresh rate, the flicker is noticeable and annoying! Anyway, now that I have a pen I really love, I set about the next process of learning: ink!
Before we talk about ink, though, let's take a short detour into paper. Paper varies in quality a lot. Paper is generally made of some combination of plant pulp (e.g. wood pulp), cotton (or similar plant fiber), clay and other binding agents. Since the heart of paper is the fiber, which generally comes from plants, the main chemical component of paper is cellulose. The length of the fibers affects many of the important properties of the paper: short fibers are easier to come by (use more wood pulp, which helps make the paper inexpensive), but are more likely to get pulled out of the paper by a super-fine-tip pen or by a very wet ink (which makes the fibers swell a little and detach from their neighbors). Long fibers (e.g. with more cotton or similar) stay in the paper and generally make the paper tougher, but are more expensive to make (cotton is pricier than wood pulp). The clay and other binding agents help with the smoothness and brightness of the paper, and have an effect on ink penetration and propagation through the paper (they can limit it). Long-fiber paper is common when you need something more durable (e.g. many checks use long fibers) or when you expect to use very fine-tip pens (e.g. it's standard note-paper in places like Japan), whereas short-fiber paper is common when you don't need the durability and want something a bit less expensive (it's common in standard copy-paper, especially in the United States). This is why many fountain-pen fans often have strong opinions on the kind of paper they want to use, or will choose ink knowing what kind of paper they will use.
The first thing many people look for in ink is the color, and preferences are all personal. I really prefer black ink. There's a formality and universality about it that I really like. However, not all black inks are created equal. There are all manner of considerations that I would not have initially thought of before I started educating myself. For example:
* *Blackness:* Some vary from black to dark-grey, and this can be affected by the flow of the pen, the width of the nib, as well as the properties of the paper. You can even get blacks that have a little of some other color, like a blue-black or a green-black, to give your writing a subtle (or not-so-subtle) hint of being "special".
* *Flow:* This was one of the problems with my first fountain pen. The ink flow wasn't consistent - sometimes it was quite wet and wrote well, sometimes it was dry and I had to tap and shake the darn thing to coax the ink to the tip of the nib. Pens that don't consistently write when you want them to are really frustrating! I also once got a cheap fountain pen as a freebie that had the opposite problem: the ink would occasionally come out in a big droplet for no particular reason (I think this was mostly the pen's fault, not the ink's fault, though). Inks have an impact here, based on their viscosity, how much they stick to the inside of the pen, their surface tension (which affects wicking), and so forth. So-called "dry" inks often have a lower surface tension, which means they don't wick as well. In practice, that means you can use up the ink in the tip of the pen and it doesn't draw more ink down to the tip. "Wet" inks have a higher surface tension, and so wick more readily. Which type of ink is better depends on your pen and your writing velocity.
* *Feathering:* when ink enters the paper, it soaks into the paper. Depending on its viscosity and chemical reactions with the paper, it can wick along the fibers of the paper. The result somewhat depends on the paper - sometimes this means just a thicker line than you expected (common in short-fiber paper), sometimes it means that there are teeny tiny black lines stretching away from what you wrote (the length of these lines depends on the length of the fibers). It's the latter that gives the effect the name "feathering", but essentially it's referring to the ink going someplace other than where you put it. Feathering goes in three dimensions as well - when it goes down into the paper, it's sometimes called penetration or "bleeding". The properties that cause feathering are often the properties that improve ink flow - the better the ink wicks to the tip of the pen, usually the better it soaks into the paper.
* *Drying time:* This is of extra importance to left-handed writers, because they're often dragging their hand across the page, but inks that take a while to dry result in smears for all kinds of reasons, or even mean they'll soak into paper that's laid on top of what you wrote (i.e. when you turn the page). Faster drying inks also tend to be thinner in viscosity, which means they often tend to feather more.
* *Permanence:* how long will this color stay this color? Does it fade over time (e.g. does the pigment oxidize)? Lots of older inks turn grey or brown when left alone for a few decades. Does it fade in sunlight? Lots of inks are intended for writing in notebooks, and as such will be obliterated or altered by intense sunlight, such as on a sign or if you leave your notebook open on the windowsill.
* *Immutability:* Can this ink be removed from the paper? For instance, if I spilled water on it, will the ink run off? Or, if someone was trying to wipe your ink off of a check, e.g. with bleach, isopropyl alcohol, acetone, or some other method, how successful would they be? Bleach is a common tool for so-called "check-washers", and it's remarkably good at removing a lot of inks, as if they had never been there.
* *Viscosity:* Thicker inks may not flow as quickly, especially in a thin-tipped pen. This may be desirable, though, in a wide-tipped pen. Viscous inks may also take longer to dry, and viscosity can also reduce feathering or penetration of the paper. Also, viscosity can be used to keep the color very vibrant, by allowing you to lay down a thick layer of ink. Viscous inks have their place - the key is finding just the right balance for what you want to use it for.
* *Acidity:* Acidic inks can often be more immutable, because they eat into the paper a little to thoroughly bond with it. However, acidic inks can also cause staining or even cause the paper to fall apart over the long term, which makes acidic ink unsuitable for archival purposes. Additionally, acidic ink can corrode the internals of the pen, including the nib. Most older inks, and even some modern inks, are at least a little acidic - that's one reason quality fountain pen nibs are often made with (or are plated with) gold or stainless steel. Corrosion of the nib and the other metal pieces of a pen is a big consideration if you're looking at buying older (used) fountain pens. Also, it can slowly degrade the rubber and soft plastic, like the cap seal or the ink bladder (in ink-bladder pens).
* *Lubrication:* This affects several things, from how the pen "glides" over the paper to how smoothly the piston slides in the ink well of the pen (if you have a piston-based refill mechanism). The glide can be more of a personal preference thing - some people like their pen to glide over the paper like oil, some prefer a little bit (or a lot) of tactile feedback when writing. Personally, I used to be a fan of very smooth gliding over paper, but my fountain pen has just a little bit of tactile feedback and I've really grown to like that. I find it gives me just that little extra bit of control over the pen, and I miss it when using rollerball pens now.
The first ink I used in a fountain pen was Levenger Black, which came with the fountain pen I bought from them (an L-Tech 3.0). I really didn't like the ink and pen combination - it had flow problems, as I mentioned (this was not the only thing I didn't like about the pen, but was the least forgivable), and I got rid of the pen so quickly, I didn't really test the other properties of the ink. I replaced it, by the way, with an L-Tech 3.0 rollerball, which has a nice needlenose tip refill that I really liked... until I fell for fountain pens. When I got my current (and favorite) pen, I used the Pilot ink that came with it: Pilot Namiki Blue. I refilled it with Pilot Namiki Black ink. Both of those inks work really nicely in that pen - excellent flow, very reliable, decent color. I don't think I would have grown to like fountain pens nearly so much if that ink hadn't been such nice ink.
But then, out of curiosity and because some of my friends had other inks, I began to investigate the options. Many pen companies (Waterman, Levenger, Pilot, Pelikan, Sailor, etc. etc.) all make inks as well, and folks have their favorites (Pelikan Blue, for instance, is a classic, well-regarded ink). There are also companies that only make ink: Diamine, J. Herbin, Private Reserve, and such. If you focus on black ink, though, you will inevitably come across one name: Noodler.
Noodler is purely an ink company, 100% made in the USA, all archival-quality (i.e. pH-neutral) and focused on value: they fill their ink bottles up to the tippy top, use whatever glass bottle is cheapest at the time (so the bottle shapes tend to change from time to time), and even use their own ink for all of their labels. They're quite popular among fountain-pen fans, and have a solid reputation for quality (can you tell I'm a fan?). When I started looking into them, I was bewildered by the breadth of colors they have; in particular, they have a bunch of different "black" inks, with almost no obvious explanation of why or what the differences are between them:
* Bulletproof Black (or simply "Black")
* Heart of Darkness
* Borealis Black
* X-Feather
* Old Manhattan
* Bad Black Moccasin
* Black Eel
* Dark Matter
* El Lawrence
* Bernanke Black
* Polar Black
* Blackerase
That's twelve different black inks! So, to help out the next guy looking at buying Noodler's ink, here's what I've gleaned - please correct me if I've gotten anything wrong.
h3. Noodler's Bulletproof Black
To my knowledge, this is the original Noodler ink. As you can read "here":http://noodlersink.com/noodlers-durable-ink-classification/, they use the term "Bulletproof" to describe the ink's immutability: it is water-proof, bleach-proof, UV-light-proof, etc. They use the term "Eternal" to describe the ink's permanence: it doesn't oxidize, and it doesn't fade in UV light. This ink is designed to react with the cellulose in the paper, much like the way people dye clothing, and as such is extremely immutable. All of their ink is pH-neutral, which means (among other things) it's an archival-quality ink. This ink is also quite black, flows nicely, and raised eyebrows with how little it feathers, even on low-quality paper. It can seem to sit on top of the paper, rather than soak into it, which helps reduce the feathering. This ink is what, to my knowledge, Noodler built their reputation on. They even had an open challenge (for $1000) for a while, to see if anyone could find a way of erasing this ink from the page! (More on this in a moment.) As a result, the permanence and immutability of this ink is quite well-studied. It is sometimes regarded as THE benchmark for black inks, it is that consistent and that popular.
h3. "Noodler's Heart of Darkness":http://noodlersink.com/whats-new/heart-of-darkness/
As Noodler was making their name with their Black ink, some folks, inevitably, wanted it a bit darker. So, the brain behind Noodler set out to make an ink that was as dark black as he could possibly make it, while still being both Bulletproof (immutable) and Eternal (permanent). This is that ink: as black as could be engineered (at the time, anyway), and just as Bulletproof (immutable) and Eternal (permanent) as the original Black. Let's not kid around: this is VERY black ink. It is a relatively quick-drying ink, and penetrates the paper more than the standard Bulletproof Black - which means it works better on shinier paper than Bulletproof Black, but also means it can feather or bleed more if you lay down a lot of ink. The feathering depends heavily on the paper and the wetness - in my experience (with an extra-fine nib), it feathers much less than the Pilot Namiki Black, and some find it feathers similarly to Bulletproof Black, but your experience will depend heavily on the nib/paper combination. It also flows quite well, which is important in extra-fine nibs. It's my current favorite black ink. Its permanence and immutability isn't as well-studied as Bulletproof Black, but is believed to be the same.
h3. "Noodler's Borealis Black":http://noodlersink.com/whats-new/borealis-black/
This ink is the absolute blackest Noodler could make. It's fashioned after traditional 1950s inks that are EXTREMELY black. According to Goulet Pens, this was made to emulate an ink by the Aurora ink company, "Aurora Black". In any event, it's so black that multiple layers of the ink are just as black as a single layer. However, sacrifices had to be made to achieve this level of light-absorption (i.e. blackness). This ink is somewhat water-resistant, but is neither Bulletproof nor Eternal. It's a "wet" writing ink, and takes a little longer to dry (so could be a bit "smeary" in practice). It also feathers more than the basic Bulletproof Black. However, if you want the absolute blackest black possible, this is the ink you want.
h3. "Noodler's X-Feather Black":http://noodlersink.com/whats-new/x-feather/
This ink is specifically designed to feather as little as possible, even on very absorptive paper. Really, it's Noodler showing off their mastery of the chemical properties of ink, because even their Bulletproof Black doesn't feather much, and this feathers even less! It is more viscous than their other black inks, which makes it flow less well in particularly fine nibs (depends on the pen), and so is more popular for use with dip pens. It also dries quite slowly, comparatively speaking. It is fairly dark black - about the same as Bulletproof Black - and is also both Bulletproof and Eternal. However, because of the anti-feathering properties, this ink can be laid down quite thickly (or in multiple layers) to become VERY VERY black without becoming messy. As a result, this ink is particularly popular with calligraphers, who typically use pens that are quite wet (i.e. a very broad nib). If you don't lay it down thickly, it is merely a very solid black.
h3. Noodler's Old Manhattan
This is an ink that Noodler doesn't sell themselves - it's exclusively made for a website called "The Fountain Pen Hospital":http://www.fountainpenhospital.com/. This ink is reputed to be even blacker than Heart of Darkness, but not quite as black as Borealis Black (apparently being super super black is somewhat at odds with being bulletproof). It is supposed to be both Bulletproof and Eternal; however, it likely makes a tradeoff in terms of its other properties (flow, feathering, etc.) to reach that additional level of blackness. Some have noted that this ink has sediment in it, and you need to shake it up before filling your pens. This sediment bonds with the paper as it dries, but it also bonds with your pen, which may need a proper cleaning (with a cleaning solution, not just rinsing with water) to get it out again.
h3. "Noodler's Bad Black Moccasin":http://noodlersink.com/whats-new/wardens-ink-series-bad-black-moccasin-bad-belted-kingfisher/
I mentioned that there was a competition to try to erase the Bulletproof Black ink. A Yale scholar, Nicholas Masluk, actually found a way to do it, using carefully controlled lasers to blast it off of the cellulose in the paper (I believe this technique depends on knowing the precise makeup of the ink, so you use exactly the right wavelength of laser). This potential problem, naturally, needed a response, and this ink is that response. It is even MORE immutable than the standard Bulletproof Black, being impervious to lasers as well, and is essentially the same color. It dries more slowly than the standard Bulletproof Black, but flows faster. As a result, it feathers a bit more than Bulletproof Black. Actually, Noodler has created an entire line of laser-proof inks, all with a name beginning with "Bad" (e.g. Bad Belted Kingfisher, which is a green ink). Noodler calls this line of inks the "Warden" series. These inks are intended to be state-of-the-art in anti-forgery technology, so, among other things, they're purposefully mixed with a slightly different recipe in every single bottle, to make it that much harder to forge and that much harder to remove (because the forger can't know exactly what's in it ahead of time).
h3. Noodler's Black Eel
This ink is in Noodler's "eel" line, also sometimes referred to as *Black American Eel*, and is identical to Noodler's Black with lubrication added in. This lubrication is intended to lubricate the piston in piston-refill pens, which would otherwise need to be dismantled and lubed on occasion. Many cartridge converters also use a piston design, and it's good for that too. It is considered Bulletproof and Eternal, takes longer to dry as a result of the lube, but is otherwise identical to Bulletproof Black. The lube also affects the writing performance: it feels smoother going on the page. As I understand it, longer dry time doesn't seem to increase the feathering of this ink relative to the Bulletproof Black, which is somewhat interesting.
h3. "Noodler's Dark Matter":http://noodlersink.com/whats-new/dark-matter/
This is an ink that was formulated to replicate a special ink that was used by scientists at Los Alamos, New Mexico on all of their government documents during the Manhattan Project. The man behind Noodler was provided a bottle of the original ink and asked to replicate it, which he did, although he made the ink pH-neutral, and out of modern ingredients. It's not really a *black* ink; more of a very dark grey (dark enough to be mistaken for black in thin lines). It's also not considered Bulletproof or Eternal (it's replicating a very old ink!), but it is water-resistant. In case you're curious, part of the reason there was a special ink for Los Alamos during the Manhattan Project was so that the ink could be identified, authenticated, and traced if it showed up in random documents somewhere it shouldn't have been (e.g. in the possession of Russian spies).
h3. Noodler's El Lawrence
This is another ink that Noodler doesn't sell themselves - it's exclusively made for a company called "Goulet Pens":http://www.gouletpens.com. This ink has more the color of used motor oil: not quite black, a little bit green, a little bit brown. It is also a unique color because it fluoresces under UV light. It is considered Bulletproof and Eternal, but tends to stick to the pen a bit more, and so requires a bit more cleaning of the pen, especially when you change inks. I don't know much about the flow or feathering of this ink.
h3. "Noodler's Bernanke Black":http://noodlersink.com/general/new-bernanke-inks/
This is a fast-drying ink (the label makes a point about needing to print money especially quickly without smearing). It achieves this by absorbing into the paper quickly, which means it's quite "wet", has really excellent flow, and consequently feathers quite readily, especially when laid on thickly. The color is about the same as Bulletproof Black.
h3. Noodler's Polar Black
The Noodler "Polar" line of inks is intended to work in extremely cold temperatures (i.e. less water content, and the ink won't freeze unless it gets *extremely* cold). All of the "Polar" inks are based on "Eel" inks (lubricated inks), but with anti-freeze added as well as lube. Accordingly, this ink is based on Black Eel, which was based on Bulletproof Black. The anti-freeze thins the ink a bit, which means that, partially because of the lube-induced longer drying time, this ink feathers a bit, similar to Bernanke Black. It is considered Bulletproof and Eternal, and is the same color as the Bulletproof Black.
h3. Noodler's Blackerase
This is part of Noodler's "erase" line of inks, which is intended to work on wet-erase whiteboards. It was originally done as an experiment, but has been popular enough to stick around. Essentially, it goes on, dries, and can be completely removed with a wet rag. It is a relatively "wet" ink, in that it penetrates well (it's intended for being used in a felt-tip marker), and as a result feathers a fair bit. It is neither Bulletproof nor Eternal (obviously), but it is very black. It's not recommended for use on paper, though of course you can.
If you poke around the internet, you will likely find people who have different impressions of the properties (flow, feathering, blackness, etc.) of these inks. I am sure that their experiences are accurate; the thing to keep in mind is this: everyone's experience will be somewhat different because of variations in pen and paper. Additionally, Noodler's ink is all handcrafted, so there can be slight variations in the effective properties of each ink from batch to batch (of course, they try to minimize these differences, except in the Warden series, but it happens). What I'm trying to explain here is *why* these inks were made, so you can understand what the purpose of each is, and what their key properties are.
That said, if you're looking for a solid, well-behaved, very black, very permanent ink, Bulletproof Black is an excellent starting point.
As I see it (this is just my opinion), the mainstays of Noodler's black ink offerings are their Bulletproof Black, the Heart of Darkness, and X-Feather. They're all Bulletproof, they're all Eternal, they're all popular and generally well-behaved inks. Heart of Darkness flows faster and dries quickly and so is good for finer pens, drier pens or lefties, while X-Feather is good for very wet, wide pens, and Bulletproof Black is halfway in between - a good "all around" black ink. All three are very black; Heart of Darkness was intended to be the blackest, but the darkness you achieve depends on your paper and how much ink you lay down. They have some "special" inks that are intended to re-create special inks and very specific hues, like El Lawrence and Dark Matter. Then there are the inks that are designed to have specific unique properties - Bernanke Black dries extra fast, Bad Black Moccasin is even more Bulletproof than the rest (more than most people would realistically need), Black American Eel is specially lubricated (for those that want a smoother ink or that have problems with older, finicky piston pens), Polar Black won't freeze (for those that need to operate in those conditions), Borealis Black is for pitch-black extremophiles, and Blackerase is for wet-erase markers. To achieve those special properties they have made sacrifices in the other properties of the ink (e.g. usability, permanence, and/or immutability), but that's just the price you pay for those special properties. In practice, however, ALL of these inks are excellent, and with the exception of Dark Matter, are all very black inks. If they work well for you, in your pen and on the paper you use, there's no reason not to use them. I wouldn't necessarily recommend, for example, that someone using Bad Black Moccasin start using some other ink unless it was behaving in a way they didn't like.
Personally, I use an extra-fine nib and I tend to write very quickly, so things like flow are very important considerations, while feathering is less likely (simply because my lines use so little ink). Because of that, and because I was attracted to the ultra-black intent, I started with the Heart of Darkness and I've been very happy - it works very well for me. It flows very well, dries quickly, and doesn't bleed much at all, even on absorptive paper. For example, I use Levenger note cards for things like Todo lists. In my experience, Pilot Namiki Black feathered pretty badly on those cards: my nice thin lines doubled in width on those cards. Heart of Darkness, on the other hand, hardly feathers at all there. And on most paper Heart of Darkness looks a touch blacker than the Pilot ink, which I like (not that I was upset with the blackness of the Pilot ink!). However, some people find that Heart of Darkness bleeds too much for them - these people are likely using wider-nibs or wetter pens than I am, but maybe it's different paper, or maybe they write more slowly than I do, or maybe they just have a super-low tolerance for feathering. In any event, if that is you, I would suggest that you go try Bulletproof Black or even X-Feather, because they have a reputation for not feathering. You could try others, like Bad Black Moccasin or Black American Eel, but they would likely feather just as much (maybe more) and those inks make trade-offs in other ways that might end up being just as annoying to work with. On the other hand, if you are interested in particular ink qualities, for instance if you're particularly concerned about anti-forgery and want the extra protection that Bad Black Moccasin provides, then you really don't have much of a choice: there's only one black ink in Noodler's arsenal that provides that property.
If what you're after is simply the blackest ink Noodler makes, get Borealis Black. If you want the absolute blackest Bulletproof ink they make, get Old Manhattan. To get that black, though, you have to sacrifice something, such as permanence, immutability, feathering, drying time, or what-have-you.
It's worth noting, in closing, that there are other inks out there that provide some of the properties that Noodler's has made famous. For instance, Private Reserve Invincible Black is supposed to be "Bulletproof" as well, using a similar cellulose-reaction technology that Noodler's Bulletproof inks do, and some people like various things about it better (e.g. it's a little bit more lubricated, and so a little bit smoother - along the lines of Black American Eel - but the exact shade of black is likely different). Noodler's is far from the only game in town. I'm not advocating Noodler's exclusively, just trying to explain what I've learned about their multitude of black inks.
h2. Exception Handling

If you google "Exceptions Considered Harmful", you'll find several folks who have a bone to pick with exceptions. The best arguments (in my opinion) are these: Exception handling introduces a hidden, "out-of-band" control-flow possibility at essentially every line of code...
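Consider a constructor that allocates two members in sequence - a minimal sketch, assuming @Foo@ holds raw pointers @a@ and @b@:

bc. Foo::Foo()
{
    a = new Bar;
    b = new Baz; // if this throws, ~Foo() is never run
}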
p. If the @Baz@ allocation throws, @a@ is leaked. The alternative, proposed by exception advocates and C++ experts (namely, "Fabrizio Oddone":http://www.pianofab.com/rant/en/except.html) is:
bc. Foo::Foo() :
    a(NULL),
    b(NULL)
{
    std::auto_ptr<Bar> exceptionSafeBar(new Bar);
    std::auto_ptr<Baz> exceptionSafeBaz(new Baz);
    a = exceptionSafeBar.release();
    b = exceptionSafeBaz.release();
}
p. If the @Baz@ allocation fails, the @auto_ptr@ destructor WILL be called (you knew that, right? it's because while @Bar@ didn't go out of scope, @exceptionSafeBar@ _does_ go out of scope as the result of an exception), which will call the @Bar@ destructor. Then you can "release" those pointers, which is a non-throwing operation (can you tell by looking at them that they cannot throw an exception?). Nice and clean, right?
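For what it's worth, @std::auto_ptr@ was deprecated in C++11 in favor of @std::unique_ptr@; the modern spelling of the same trick (same hypothetical types) would be:

bc. std::unique_ptr<Bar> exceptionSafeBar(new Bar);
std::unique_ptr<Baz> exceptionSafeBaz(new Baz);
a = exceptionSafeBar.release();
b = exceptionSafeBaz.release();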
An alternative proposed by C programmers would be this:
bc. struct Foo *f = malloc(sizeof(struct Foo));
if (f == NULL) {
    return NULL;
}
f->a = malloc(sizeof(struct Bar));
f->b = malloc(sizeof(struct Baz));
if (!f->a || !f->b) {
    if (f->a) free(f->a);
    if (f->b) free(f->b);
    free(f);
    return NULL;
}
p. The one with exceptions is a lot fewer lines of code, but which one do you find easier to understand? Which would be easier to debug? Which would you have thought of?
But that’s just memory allocation – and the proposed exception-friendly solution is, essentially, a form of garbage collection. And what about complex data structures, or some other thing where a potential failure can come part-way through a change? The basic C++ answer to this is the same: hide the cleanup in destructors of custom error-handling classes that are created on a per-operation basis. It’s like a dead-man switch: successful execution has to disarm the bomb (er, I mean, "disarm the clean-up variables") whose purpose is to destroy everything in the case of unexpected death (er, "an exception"). That’s what the call to @release()@ did, among other things. Now ask yourself: what happens if a destructor encounters an exception? How familiar are you with the rules governing destructor ordering in exceptional cases and how to work around it when necessary? How much implicit behavior do you want to rely on for your error-handling?
The strength of exception handling is also its greatest weakness: the fact that it's hidden. The big benefit is that your "happy path" through the code is clean and obvious. The big downside is that the error paths (both the sources of errors and the handling of errors) are largely invisible. (And that's in addition to the challenges when doing threaded code.)
h2. VMWare Workstation autostart vmware-user
I didn't find this solution anywhere on the internet, so I figured I'd post it here, even if only for the next time I need it.
I have a copy of VMWare Workstation 10 that I use on a Windows laptop at work to run Ubuntu. In general, it works quite well - it does certain things faster than VirtualBox, and has a few key features I need for work. However, every time I have to re-install the VMware tools, it forgets how to automatically run the @vmware-user@ binary at login. This program, for those who are unfamiliar, turns on a bunch of the VMWare extras, such as making sure that it knows when to resize the screen (e.g. when I maximize Linux to full-screen), among other things.
Now, sure, this program can be run by hand (as root), but it's the principle of the thing.
The trick, as it turns out, is permissions. This program needs to be run by root, so it's actually a link to a setuid binary in @/usr/lib/vmware-tools/bin64@. All the files in @/usr/lib/vmware-tools/@ are installed, for whatever reason, with the wrong permissions. The following set of commands will fix it:
bc. sudo find /usr/lib/vmware-tools/ -type d -exec chmod o+rx {} \;
sudo find /usr/lib/vmware-tools/ -type f -perm -g+rx ! -perm -o+rx -exec chmod o+rx {} \;
sudo find /usr/lib/vmware-tools/ -type f -perm -g+r ! -perm -o+r -exec chmod o+r {} \;
But that's not enough! There's _another_ directory that's been installed improperly: @/etc/vmware-tools/@! I'm less confident about making the entire contents of this directory readable by ordinary users, but the following two commands seemed to be enough to make things work:
bc. sudo chmod go+r /etc/vmware-tools/vmware-tools*
sudo chmod go+r /etc/vmware-tools/vmware-user.*
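Once the permissions are fixed, @vmware-user@ should start automatically at your next login. To check without logging out, you can now run it by hand as an ordinary user (no @sudo@ needed, since the setuid wrapper is finally readable and executable):

bc. vmware-user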
Hopefully, that helps someone other than just me.
h2. Automake 1.13's Parallel Harness
When GNU released Automake 1.13, they made some interesting (and unfortunate) decisions about their test harness.
Automake has (and has had since version 1.11, back in May 2009) two different test harnesses: a serial one and a parallel one. The serial one is simpler, but the parallel one has a lot of helpful features. (In this case, parallel means running multiple tests at the same time, not that the harness is good for tests that happen to themselves be parallel.) Importantly, these two harnesses are mutually incompatible (e.g. the use of @TESTS_ENVIRONMENT@ is often necessary in the serial harness and will seriously break the parallel harness). The default test harness, of course, has always been the serial one for lots of very good reasons, including backwards compatibility. However, starting in automake 1.13 (which is now the standard on up-to-date Ubuntu, and will become common), whether it's a good idea or a bad idea, the default test harness behavior is now the parallel one. There is a way to specify that you want the original behavior (the "serial-tests" option to automake); however, that option was only added in automake 1.12 (released in April 2012), and since automake aborts when it sees an unsupported option, that path doesn't really provide much backward compatibility.
Now, it's clear that the automake authors think the parallel test harness is the way of the future, and is probably what people should be using for new projects. However, this has an impact on backwards compatibility. For instance, the version of automake that comes with RHEL 5.8 (which is still supported and relatively common) is 1.9.6 - which was released waaay back in July 2005!
One option for dealing with the latest wrinkle in automake is to adopt the parallel test harness, make 1.11 the required minimum version, and simply abandon support for developing on older systems. That may or may not be a viable option. Another option is to support both harnesses somehow, e.g. by providing two different Makefile.am files (or more, depending on how complex your test folder tree is) and swapping between them via some sort of script (e.g. autogen.sh, which many people use). This, however, is a maintenance nightmare. And the third option is to stick with the serial test harness, detect the automake version at configure-generation time, and conditionally define the "serial-tests" option only when necessary. This will maintain compatibility with old versions, but is somewhat fragile.
As an example of this latter approach, assuming your @AM_INIT_AUTOMAKE@ line looks like this:
bc. AM_INIT_AUTOMAKE([1.9 no-define foreign])
...to make it look like this:
bc. AM_INIT_AUTOMAKE([1.9 no-define foreign]
                 m4_ifdef([AM_EXTRA_RECURSIVE_TARGETS], [serial-tests]))

This works because the @AM_EXTRA_RECURSIVE_TARGETS@ macro was introduced in automake 1.13, the same version that flipped the default harness; the @m4_ifdef@ therefore appends the @serial-tests@ option only under automake versions that both need it and understand it.
h2. Building GCC 4.8 on RHEL5.8
I didn't find this solution anywhere else on the internet, so I figured I'd post it here...
When building GCC 4.8.0 on an up-to-date RHEL 5.8 system, it died, complaining that in the file @libstdc++-v3/libsupc++/unwind-cxx.h@, there's a macro (@PROBE2@) that cannot be expanded. It complains about things like this:
bc. In file included from ../../../../libstdc++-v3/libsupc++/unwind-cxx.h:41:0,
from ../../../../libstdc++-v3/libsupc++/eh_throw.cc:26:
../../../../libstdc++-v3/libsupc++/eh_throw.cc: In function ‘void __cxxabiv1::__cxa_throw(void*, std::type_info*, void (*)(void*))’:
../../../../libstdc++-v3/libsupc++/unwind-cxx.h:45:34: error: unable to find string literal operator ‘operator"" _SDT_S’
#define PROBE2(name, arg1, arg2) STAP_PROBE2 (libstdcxx, name, arg1, arg2)
^
The most helpful answer I could find was "this one":http://gcc.gnu.org/ml/gcc-help/2013-04/msg00012.html which didn't actually HELP so much as point a good finger at the culprit: SystemTap. Never heard of it? Neither had I. Their headers are apparently not compatible with C++11, and need to be updated. Don't have root? Heh, have fun with that.
Of course, telling GCC to ignore SystemTap is not possible, as far as I can tell, unless SystemTap happened to be installed in an unusual place. So, instead, we have to resort to convincing GCC that it's not installed. Unfortunately, that can get tricky. What I ended up having to do was edit @x86_64-unknown-linux-gnu/libstdc++-v3/config.h@ and comment out the line that says
bc. #define HAVE_SYS_SDT_H 1
...so that it reads this instead:
bc. /*#define HAVE_SYS_SDT_H 1*/
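If you'd rather script that edit (handy, since as noted below it has to be re-applied for each bootstrap stage), a sed one-liner along these lines should do it - the path is the same @config.h@ as above:

bc. sed -i 's|^#define HAVE_SYS_SDT_H 1|/*&*/|' x86_64-unknown-linux-gnu/libstdc++-v3/config.h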
It's not a *good* solution, of course, because I'm coming along behind the configuration logic and changing some of the answers without ensuring that there weren't conditional decisions made on that basis, AND since GCC builds itself several times to bootstrap into a clean, optimized product, you have to make that edit multiple (three) times. Basically, this is a horrible horrible hack around the problem. BUT, this works, is simple, and gets me a working compiler.
"Trump" Puts it into Simple Terms?tag:www.memoryhole.net,2012:/kyle//6.13892012-07-02T17:14:01Z2012-07-02T17:52:24ZA friend recently sent me this pithy sentence, purportedly penned by Donald Trump: ‘Let me get this straight … We’re going to be “gifted” with a health care plan we are forced to purchase and fined if we don’t, Which,...Kyle Wheeler
A friend recently sent me this pithy sentence, purportedly penned by Donald Trump:
bq.. 'Let me get this straight ...
We're going to be "gifted" with a health care plan we are forced to purchase and fined if we don't, Which, purportedly covers at least ten million more people, without adding a single new doctor, but provides for 16,000 new IRS agents, written by a committee whose chairman says he doesn't understand it, passed by a Congress that didn't read it (but exempted themselves from it), and signed by a Dumbo President who smokes, with funding administered by a treasury chief who didn't pay his taxes, for which we'll be taxed for four years before any benefits take effect, by a congress which has already bankrupted Social Security and Medicare, all to be overseen by an obese surgeon general and financed by a country that's broke!!
What the hell could possibly go wrong?'
p. Let's put aside the over-use of exclamation marks for the moment, as well as arguments about whether Trump actually said it (probably not, but whatever), and dismantle this piece of garbage.
You are "gifted" with a system of roads that you are forced to pay for. You are "gifted" with the protection of a military that you are forced to pay for. Let's not get bent out of shape about the things government requires us to pay for. It's called living in a civilization; get used to it.
As for covering more people… the expansion of Medicaid alone (expanding coverage to adults with up to 133% of poverty, aka $18,310/yr) is expected to cover as many as "17 million more Americans":http://www.wptv.com/dpp/money/consumer/affordable-care-act-answers-to-commonly-asked-questions, so I have no idea where "Trump's" number came from. Citations would be nice. As for those 16,000 new IRS agents? According to factcheck.org, "that's complete baloney":http://www.factcheck.org/2010/03/irs-expansion/. They describe the claim as "wildly inaccurate", stemming "from a partisan analysis based on guesswork and false assumptions, and compounded by outright misrepresentation."
The Affordable Care Act "came from the Senate Finance Committee":http://www.finance.senate.gov/issue/?id=32be19bd-491e-4192-812f-f65215c1ba65, starting way back in 2007. The chair of that committee is Senator Max Baucus, and I cannot find a quote from him on the internet anywhere suggesting that he does not understand it (though I admit, absence of proof is not proof of absence). There's plenty out there about him saying he didn't read every page of it, though. But that kind of inaccuracy does not make me more interested in believing the author, even if it could be Trump.
Indeed, whether Congress read it or not, they are "NOT exempt from it":http://www.forbes.com/sites/rickungar/2011/12/08/congress-exempted-from-obamacare/. In fact the law says "Members of Congress and congressional staff" will only be offered plans created by the law or offered through exchanges established by the law.
I'm not sure what exactly Obama's smoking habit has to do with anything; apparently it means he's a hypocrite somehow? I don't see it. But now we're into the personal-attacks portion of Trump's little diatribe, which, whether true or not, says nothing about the law - it's just ad hominem. Presumably he thinks that the personal finances of the "treasury chief" or personal habits of the Surgeon General are relevant to health care legislation. On top of that, Trump apparently can't be bothered to type "Secretary of the Treasury" - Britain is the country that uses "Chief" to refer to the head of the treasury (as in "Chief Secretary to the Treasury"); I guess Trump got confused.
As for Social Security being bankrupt, perhaps Trump should explain things to Forbes, who seems to think he has fallen afoul of a "common logical error":http://www.forbes.com/sites/johntharvey/2011/04/08/why-social-security-cannot-go-bankrupt/. By the same token, Medicare cannot go bankrupt either, but it's worth pointing out that Medicare is in much better financial shape "as a result of the Affordable Care Act":http://www.cbpp.org/cms/index.cfm?fa=view&id=3532. And is the United States broke? No. "As Bloomberg's David Lynch puts it":http://www.bloomberg.com/news/2011-03-07/bonds-show-why-boehner-saying-we-re-broke-is-figure-of-speech.html, all evidence, from the rate we pay on borrowing, to tax revenue as a percentage of the economy, to the trend in prices for insuring US debt, suggests quite the opposite.
But, you know, when you're bloviating, why let facts stand in your way? This is a nearly random collection of GOP talking points (many of which conflict with reality), rearranged into invective against the health care law (which has significant flaws, but none of which have been touched on here). Surprise! I suppose, if this is truly Trump's work, that I should have expected as much given that he still wants to argue with Hawaii about Obama's birth certificate.
h2. A Use for Volatile in Multi-threaded Programming
As anyone who has done serious shared-memory parallel coding knows, the @volatile@ keyword in C (and C++) has a pretty bad rap. There is no shortage of people trying to explain why this is the case. Take for example this article from the original author of Intel Threaded Building Blocks (a man who, I assure you, knows what he's talking about): "Volatile: Almost Useless for Multithreaded Programming":http://software.intel.com/en-us/blogs/2007/11/30/volatile-almost-useless-for-multi-threaded-programming/. There are others out there who decry @volatile@, and their arguments are right to varying degrees. The heart of the issue is that first, @volatile@ is EXPLICITLY IGNORABLE in the C specification, and second, that it provides neither ordering guarantees nor atomicity. Let me say that again, because I've had this argument:
bq. *VOLATILE DOES NOT ESTABLISH ORDERING REQUIREMENTS*
bq. *VOLATILE DOES NOT PROVIDE ATOMICITY*
But I'm not here to talk about that; I want to talk about a place where I found it to be critical to correctness (as long as it isn't ignored by the compiler, in which case creating correct code is _painful_). Honestly, I was quite surprised about this, but it makes sense in retrospect.
I needed a double-precision floating point atomic increment. Most increments, of the @__sync_fetch_and_add()@ variety, operate exclusively on integers. So, here's my first implementation (just the x86_64 version, without the PGI bug workarounds):
bc. double qthread_dincr(double *operand, double incr)
{
    union {
        double   d;
        uint64_t i;
    } oldval, newval, retval;
    do {
        oldval.d = *operand;
        newval.d = oldval.d + incr;
        __asm__ __volatile__ ("lock; cmpxchgq %1, (%2)"
                              : "=a" (retval.i)
                              : "r" (newval.i), "r" (operand),
                                "0" (oldval.i)
                              : "memory");
    } while (retval.i != oldval.i);
    return oldval.d;
}
Fairly straightforward, right? But this has a subtle race condition in it. The dereference of @operand@ gets translated to the following assembly:
bc. movsd (%rcx), %xmm0
movd (%rcx), %rax
See the problem? In the assembly, it's actually dereferencing @operand@ TWICE; and under contention, that memory location could change values between those two instructions. Now, we might pause to ask: why is it doing that? We only told it to go to memory ONCE; why would it go twice? Well, a certain amount of that is unfathomable. Memory accesses are usually slow, so you'd think the compiler would try to avoid them. But apparently sometimes it doesn't, and technically, dereferencing non-volatile memory multiple times is perfectly legal. The point is, this is what happened when compiling with basically every version of gcc 4.x right up through the latest gcc 4.7.1.
In any event, there are two basic ways to fix this problem. The first would be to code more things in assembly; either the entire loop or maybe just the dereference. That's not an appealing option because it requires me to pick which floating point unit to use (SSE versus 387 versus whatever fancy new stuff comes down the pike), and I'd rather let the compiler do that. The second way to fix it is to use @volatile@. If I change that dereference to this:
bc. oldval.d = *(volatile double *)operand;
Then the assembly it generates looks like this:
bc. movsd (%rcx), %xmm0
movd %xmm0, %rax
Problem solved! As long as the compiler doesn't ignore the volatile cast, at least...
So, for those who love copy-and-paste, here's the fixed function:
bc. double qthread_dincr(double *operand, double incr)
{
    union {
        double   d;
        uint64_t i;
    } oldval, newval, retval;
    do {
        oldval.d = *(volatile double *)operand;
        newval.d = oldval.d + incr;
        __asm__ __volatile__ ("lock; cmpxchgq %1, (%2)"
                              : "=a" (retval.i)
                              : "r" (newval.i), "r" (operand),
                                "0" (oldval.i)
                              : "memory");
    } while (retval.i != oldval.i);
    return oldval.d;
}
(That function will not work in the PGI compiler, due to a compiler bug "I've talked about previously.":http://www.memoryhole.net/kyle/2010/06/pgi_compiler_bug.html)
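In case it helps, here's a quick driver for the function above (my example, not part of qthreads): four threads each add 0.5 a million times, and since multiples of 0.5 in this range sum exactly in double precision, the result should be exactly 2000000.0.
bc. #include <pthread.h>
#include <stdio.h>
static double total = 0.0;
static void *adder(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        qthread_dincr(&total, 0.5); /* the atomic increment defined above */
    return NULL;
}
int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, adder, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("%f\n", total); /* 2000000.000000 if the increment is truly atomic */
    return 0;
}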
Google Breaks its own DKIM Signatures (2011-07-05), Kyle Wheeler: So, Google, vaunted tech company that it is, seems to be doing something rather unfortunate. One of my friends/users, who uses Gmail as a repository for his email, recently notified me that email sent to him from other Gmail accounts...
A C Lock-Free Hash Table Implementation (2011-06-03), Kyle Wheeler
Well, I finally have one: a lock-free hash table implementation in C. The hash table I implemented is based on the work by Ori Shalev and Nir Shavit. Much of the code is similar to the example code they published in their paper "Split-Ordered Lists: Lock-Free Extensible Hash Tables":http://doi.acm.org/10.1145/872035.872049, back in 2006, but has been modified to support additional conveniences so it has a library-esque interface.
There is one problem unique to this algorithm: it doesn't compact well if you go from lots and lots of entries back down to just a few. Not that it compacts _horribly,_ but it doesn't shrink all the way back down; it keeps some meta-data around (what the algorithm calls "dummy keys"). On the other hand, it does quite well if you tend to have a rapidly rotating set of keys (even with random-ish key values) but with a ceiling on the number of them in the hash table at any one time: it uses a static set of dummy keys in that case. In an effort to reduce the impact of the compaction problem, I didn't implement the dynamic-array extension, so this implementation is performance-limited to around 2048 concurrent keys or so (you can insert as many as you like, but beyond that point things begin to get slow).
Of course, there are three problems with this algorithm (and implementation) that are pretty typical:
# It has the ABA problem. I think this can be solved (or at least mitigated) with generation counters (there's a sketch of the idea just after this list), so I don't see it as a big issue.
# It has the blind-delete problem. This is a really tough one. I _think_ this can be solved, but my current best thinking on the matter ends up devolving to a highly-contended CAS operation, which is obviously sub-optimal.
# For simplicity, I didn't use memory pools for the nodes, so I end up doing malloc/free calls on insertion and deletion. This isn't a correctness problem, but it can become a performance bottleneck.
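For the curious, here's what I mean by generation counters. This is a sketch of the general technique, not code from this implementation (all of the names are mine): pair each value with a counter so that a single 64-bit CAS covers both, and bump the counter on every swap, so a stale CAS fails even when the value happens to match.
bc. #include <stdint.h>
typedef uint64_t tagged_t; /* low 32 bits: node index; high 32 bits: generation */
#define TAG_MAKE(idx, gen) (((uint64_t)(gen) << 32) | (uint32_t)(idx))
#define TAG_IDX(t) ((uint32_t)(t))
#define TAG_GEN(t) ((uint32_t)((t) >> 32))
static int tagged_cas(volatile tagged_t *loc, tagged_t expected, uint32_t newidx)
{
    /* Always increment the generation, so that A->B->A becomes
     * (A,0)->(B,1)->(A,2) and a CAS still expecting (A,0) fails. */
    tagged_t desired = TAG_MAKE(newidx, TAG_GEN(expected) + 1);
    return __sync_bool_compare_and_swap(loc, expected, desired);
}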
Anyway, without further ado, "here's the code":http://www.memoryhole.net/kyle/2011/06/02/lf_hash.c. You can download it and compile it yourself; it is self-contained C99 code with a trivial (single-threaded) driver in @main()@.
I've talked about concurrent hash tables before, "here":http://www.memoryhole.net/kyle/2007/10/intels_tbb_hash_is_severely_li.html and "here":http://www.memoryhole.net/kyle/2007/09/i_wish_i_had_a_cbased_lockfree.html, and those discussions may be interesting to anyone who found this code interesting and/or useful. Having re-read Chris Purcell's paper, I should point out that while he does have some very worthwhile code, his algorithm also has the compaction problem. He *does*, however, have a good summary of several designs, and of the drawbacks of each (primarily in the area of a need for garbage collection).
Gmail, DKIM, and DomainKeys (2011-01-10), Kyle Wheeler: I recently spent a bunch of time trying to resolve some delivery problems we had with Gmail. Some of it was dealing with idiosyncratic issues associated with our mail system, and some of it, well, might benefit others. In our...
This reflects what would happen during the SMTP conversation with Gmail's servers: the double-wockas would be there as well, which is, officially, invalid SMTP syntax. The solution we're using now is relatively trivial and works well:
bc. SENDER=`formail -c -x Return-Path | head -n 1 | tr -d '<>'`
SENDMAILFLAGS="-f${SENDER}"
Let me reiterate that, because it's worth being direct. Using Gmail's suggested solution caused messages to %{color:red}_*+DISAPPEAR+*_%. %{color:red}_*+IRRETRIEVABLY+*_%.
Granted, that was my fault for not testing it first. But still, come on Google. That's a BAD procmail recommendation.
There were a few more problems I had to deal with, relating to DomainKeys and DKIM, but these are somewhat idiosyncratic to our mail system (though they may be of interest to folks with a similar setup). Here I should explain that when you send from Gmail through another server via SMTP-AUTH, Gmail signs the message with its DK key, both with a DKIM and with a DomainKeys header. This is DESPITE the fact that the @Return-Path@ is for a non-gmail domain; but because the @Sender@ is a gmail.com address, this behavior is completely legitimate and within the specified behavior of DKIM.
The first problem I ran into was that, without a new @Return-Path@, the @dktest@ utility from DomainKeys would refuse to sign messages that had already been signed (in this case, by Gmail). Not only that, but it would refuse in a very bad way: instead of spitting out something that looks like a @DomainKey-Signature:@ header, it would spit out an error message. Thus, unless my script was careful about only appending things that start with @DomainKey-Signature:@ (which it wasn't), I would get message headers that looked like this:
bc. Message-Id: <4d275412.6502e70a.3bf6.0f6dSMTPIN_ADDED@mx.google.com>
do not sign email that already has a dksign unless Sender was found first
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=gmail.com; h=mime-version
That's an excerpt, but you can see the problem. It spit an invalid header (the error) into the middle of my headers. This kind of thing made Gmail mad, and rightly so. It made me mad too. So mad, in fact, that I removed libdomainkeys from my toolchain completely. Yes, I could have added extra layers to my script to detect the problem, but that's beside the point: that kind of behavior by a tool like that is malicious.
The second problem I ran into is, essentially, an oversight on my part. My signing script chose a domain (correctly, I might add), and then handed the signing utility a filename for that domain's private key. HOWEVER, since I didn't explicitly tell it what domain the key was for, it attempted to discover the domain based on the other headers in the message (such as @Return-Path@ and @Sender@). This auto-discovery was only accurate for users like myself who don't use Gmail to relay mail through our server. But for messages from Gmail users, who relay via SMTP-AUTH, the script would detect that the mail's sender was a Gmail user (similar problems would arise for mailing lists, depending on their sender-rewriting behavior). So it would assume that the key it had been handed was for that sender's domain (i.e. gmail.com), and would create an invalid signature. This, thankfully, was easy to fix: merely adding an explicit @--domain=$DOMAIN@ argument for the signing script fixed the issue. But it was a weird one to track down! It's worth pointing out that the libdomainkeys @dktest@ utility does not provide a means of doing this.
Anyway, at long last, mail seems to be flowing to my Gmail users once again. Thank heaven!
PGI Compiler Bug (2010-06-16), Kyle Wheeler
I ran across another PGI compiler bug that bears noting because it was so annoying to track down. Here's the code:
bc. static inline uint64_t qthread_cas64(
    volatile uint64_t *operand,
    const uint64_t newval,
    const uint64_t oldval)
{
    uint64_t retval;
    __asm__ __volatile__ ("lock; cmpxchg %1,(%2)"
                          : "=&a" (retval) /* store from RAX */
                          : "r" (newval),
                            "r" (operand),
                            "a" (oldval)   /* load into RAX */
                          : "cc", "memory");
    return retval;
}
Now, both GCC and the Intel compiler will produce code you would expect; something like this:
bc. mov 0xffffffffffffffe0(%rbp),%r12
mov 0xffffffffffffffe8(%rbp),%r13
mov 0xfffffffffffffff0(%rbp),%rax
lock cmpxchg %r12,0x0(%r13)
mov %rax,0xfffffffffffffff8(%rbp)
In essence, that's:
# copy the newval into register @%r12@ (almost any register is fine)
# copy the operand into register @%r13@ (almost any register is fine)
# copy the oldval into register @%rax@ (as I specified with "a")
# execute the ASM I wrote (the compare-and-swap)
# copy register @%rax@ to the variable I specified
Here's what PGI produces instead:
bc. mov 0xffffffffffffffe0(%rbp),%r12
mov 0xffffffffffffffe8(%rbp),%r13
mov 0xfffffffffffffff0(%rbp),%rax
lock cmpxchg %r12,0x0(%r13)
mov %eax,0xfffffffffffffff8(%rbp)
You notice the problem? That last step became @%eax@, so only the lower 32 bits of my 64-bit CAS get returned!
The workaround is to do something stupid: be more explicit. Like so:
bc. static inline uint64_t qthread_cas64(
    volatile uint64_t *operand,
    const uint64_t newval,
    const uint64_t oldval)
{
    uint64_t retval;
    __asm__ __volatile__ ("lock; cmpxchg %1,(%2)\n\t"
                          "mov %%rax,(%0)"
                          :
                          : "r" (&retval), /* store from RAX */
                            "r" (newval),
                            "r" (operand),
                            "a" (oldval)   /* load into RAX */
                          : "cc", "memory");
    return retval;
}
This is stupid because it requires an extra register; it becomes this:
bc. mov 0xfffffffffffffff8(%rbp),%rbx
mov 0xffffffffffffffe0(%rbp),%r12
mov 0xffffffffffffffe8(%rbp),%r13
mov 0xfffffffffffffff0(%rbp),%rax
lock cmpxchg %r12,0x0(%r13)
mov %rax,(%rbx)
Obviously, not a killer (since it can be worked around), but annoying nevertheless.
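As an aside, in case the calling convention isn't obvious: the function returns whatever was in memory, so a successful swap is one where the return value equals @oldval@. Here's a hypothetical helper (mine, not qthreads code) built on top of it:
bc. /* Atomic 64-bit increment via the CAS above. */
static inline uint64_t atomic_incr64(volatile uint64_t *ctr)
{
    uint64_t old;
    do {
        old = *ctr;
    } while (qthread_cas64(ctr, old + 1, old) != old);
    return old + 1; /* the value we installed */
}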
A similar error happens in this code:
bc. uint64_t retval;
__asm__ __volatile__ ("lock xaddq %0, (%1)"
                      : "+r" (retval)
                      : "r" (operand)
                      : "memory");
It would appear that PGI completely ignores the bitwidth of output data!
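I haven't verified this one on PGI, but presumably the same explicit-store trick works around it too. A sketch in the same spirit as the CAS fix above (with @incr@ standing in for the increment, as in the surrounding function):
bc. uint64_t retval = incr; /* xadd wants the increment in the register */
__asm__ __volatile__ ("lock xaddq %%rax, (%1)\n\t"
                      "mov %%rax, (%0)"
                      :
                      : "r" (&retval), "r" (operand), "a" (retval)
                      : "cc", "memory");
/* retval now holds the pre-add value of *operand, all 64 bits of it */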
qsort_r (2009-11-13), Kyle Wheeler: Once upon a time, in 2002, the BSD folks had this genius plan: make the standard C qsort() function safe to use in reentrant code by creating qsort_r() and adding an argument (a pointer to pass to the comparison function)....
More Compiler Complaints: PGI Edition (2009-06-10), Kyle Wheeler: Continuing my series of pointless complaining about compiler behavior (see here and here for the previous entries), I recently downloaded a trial version of PGI's compiler to put in my Linux virtual machine to see how that does compiling qthreads....
_s at the end! Apparently PGI is okay with this:
bc. struct qt##initials##_s arg = { array, checkfeb }; \
::sigh:: Stupid, stupid compiler. At least it can be worked around.
h3. Thwarting The Debugger
PGI is also bad at handling static inline functions in headers. How bad? Well, first of all, the DWARF2 symbols it generates (the default) are incorrect. It gets the line numbers right but the file name wrong. For example, if I have an inline function on line 75 of @qthread_atomics.h@, include that header in @qt_mpool.c@, and then use that function on line 302, the DWARF2 symbols generated will claim that the function is on line 75 of @qt_mpool.c@ (which isn't even correct if we assume it's generating DWARF2 symbols from the pre-processed source; and besides which, all the other line numbers are from the non-pre-processed source). You CAN tell it to generate DWARF1 or DWARF3 symbols, but then it simply leaves out the line numbers and file names completely. Handy, no?
h3. Everyone Else is Doing It...
Here's another bug in PGI... though I suppose it's my fault for outsmarting myself. So, once upon a time, I (think I) found that some compilers require assembly memory references to be within parentheses, while others require them to be within brackets. Unfortunately I didn't write down which ones did what, so I don't remember if I was merely being over-cautious in my code, or if it really was a compatibility problem. Nevertheless, I frequently do things like this:
bc. uint32_t atomic_incr(volatile uint32_t *op, const int incr) {
    uint32_t retval = incr;
    __asm__ __volatile__ ("lock; xaddl %0, %1"
                          : "=r" (retval)
                          : "m" (*op), "0" (retval)
                          : "memory");
    return retval;
}
Note that weird @"m"(*op)@ construction? That was my way of ensuring that the right memory reference syntax was automatically used, no matter what the compiler thought it was. So, what does PGI do in this instance? It actually performs the dereference! In other words, it behaves as if I had written:
bc. uint32_t atomic_incr(volatile uint32_t *op, const int incr) {
    uint32_t retval = incr;
    __asm__ __volatile__ ("lock; xaddl %0, (%1)"
                          : "=r" (retval)
                          : "r" (*op), "0" (retval)
                          : "memory");
    return retval;
}
when what I really wanted was:
bc. uint32_t atomic_incr(volatile uint32_t *op, const int incr) {
    uint32_t retval = incr;
    __asm__ __volatile__ ("lock; xaddl %0, (%1)"
                          : "=r" (retval)
                          : "r" (op), "0" (retval)
                          : "memory");
    return retval;
}
See the difference? <sigh> Again, it's not hard to fix so that PGI does the right thing. And maybe I was being too clever in the first place. But dagnabit, my trick should work! And, more pointedly, it DOES work on other compilers (gcc and icc at the bare minimum, and I've tested similar things with xlc).