All posts by Dakota

FireWire Quibble

I have a personal quibble: FireWire may be a dead product, but there are a lot of legacy devices out there (mostly in the audio world). The current-generation Thunderbolt–FireWire adapter is completely inadequate for these devices, for two reasons: 1) it’s an end-of-line device, meaning it doesn’t daisy chain, which makes it difficult to use with computers that have few Thunderbolt ports; and 2) it is limited by Thunderbolt power delivery maximums to only 10W, which many legacy FireWire devices easily exceed when operating on bus power. As an example, I have a not-that-old FireWire audio interface that I’d like to run off bus power from my laptop, on the go. It draws 7.5W idle, but spikes over 10W during startup (charging capacitors, I’m sure). I can’t use it with the Thunderbolt adapter alone; I need either DC power (dumb) or a second adapter (since, like any good FireWire device, it has two ports for daisy chaining). The DC power port went out a while back, so now I use an original iPod FireWire charger on the second port to deliver enough power.

It would be nice if someone offered a powered FireWire adapter that could deliver enough wattage for these legacy devices.

InSecurity: Panera Bread Co.

This is the first installment of a new segment titled InSecurity, covering consumer-relevant business and government security practices with an emphasis on their failures.


Each new week, it seems, brings a new corporate or government data breach or operational security failure to our awareness. This week is no exception. The failure this time, however, is particularly egregious: over the course of eight months, Panera Bread Co. knowingly failed to protect sensitive customer data from unauthorized access. According to security researcher Dylan Houlihan, who originally discovered the vulnerability, at least one web API endpoint allowed “the full name, home address, email address, food/dietary preferences, username, phone number, birthday and last four digits of a saved credit card to be accessed in bulk for any user that had ever signed up for an account”. Equivalent vulnerabilities were eventually discovered across multiple Panera web properties.

Houlihan first reported the issue in August of 2017, reaching out to Panera’s Information Security Director, none other than Mike Gustavison. No stranger to “managing” wildly insecure corporate systems, Gustavison worked as “ISO – Sr. Director of Security Operations” at Equifax from 2009–2013. After eight months of repeated attempts to convince Panera of the severity of their security hole, Houlihan reached out to security industry expert Brian Krebs, whose influence was enough to extract mainstream media coverage and a formal PR statement from Panera. Incredibly, and despite public statements to the contrary, Panera failed to secure the vulnerable API endpoints.

For a full explanation of the vulnerability and a timeline of events, reference the primary link.

On the Heritability of Intelligence

In their 2013 article for Current Directions in Psychological Science, Tucker-Drob, Briley, and Harden propose a transactional model for the heritability of cognitive ability. The basis for this model is a well-documented biological phenomenon and the foundation of phenotypic variation: the regulation of gene expression by environmental stimuli. The authors apply this finding to the characteristic of cognitive ability and support it using evidence from twin studies. They propose that, through self-selection of stimulating environments and experiences, individuals with inherited high cognitive ability cause the expression of more cognitive-ability genes, which in turn feeds the selection of more stimulating environments. However, beyond this straightforward and reasonably supported model, they also provide a guiding interpretation which places heavy emphasis on the significance of genetic heritability of high cognitive ability. They ignore shortcomings in the data and make claims which are both unnecessary and unsupported by the evidence. The most significant of these claims are presented below, with rebuttals.

Having claimed that cognitive ability is undeniably a heritable trait (“no longer a question of serious scientific debate”) (Tucker-Drob, Briley, and Harden, 2013; 1), the authors propose that this trait is subject to the same regulatory forces of expression as affect everything from the production of digestive enzymes to the melanin content of the skin. Although undoubtedly a simplification, this claim seems to hold true. The evidence brought to support the heritability, on some level, of cognitive ability is reasonably robust. In fact, it follows naturally from the fundamental theoretical basis underpinning the whole diversity of individual characteristics: the regulation of gene expression. As a trait which varies between individuals, changes over the lifetime, and is subject to heritable factors, it is reasonable (if not inevitable) that cognitive ability should have some form of expression regulation mechanism in the genome. This proposed mechanism is sound and well founded. The authors then propose that heritability of cognitive ability is environmentally influenced, and further that high cognitive ability is more heritable than low cognitive ability.

However, the shortcomings of their specific model begin to show as early as the first example. The authors discuss the average educational attainment of Norwegians, which increased during the 20th century due to social and regulatory changes favouring education, in the context of measured heritability of educational attainment increasing during roughly the same period. The educational attainment data report numbers for 1960 vs. 2000, while the data on its heritability distinguish only “before 1940” and after, and were themselves published in 1985. Ignoring the obvious flaw of comparing temporally unmatched samples of population data, the example provides no evidence of what the authors claim: educational attainment is not a measure of cognitive ability, but of the strength of social norms and the reach of educational programs. Given Norway’s system of public education (10 years compulsory, 3 years ‘optional’ in name only, 3–8 years optional university level), the c. 2000 average attainment of 11.86 years means individuals complete, on average, roughly 91% of those first 13 years. Completion of compulsory public education is not a matter of individual aptitude or achievement, but of social norms and education system policy. Completing nine-tenths of compulsory education is not indicative of high cognitive ability, and the data on heritability of cognitive ability are too narrow to offer a comparison.

The next section of the piece, which introduces the underlying framework of “gene-environment correlation” before explaining the authors’ proposed transactional model of cognitive ability specifically, serves as evidence against the very application which the authors make. They put it simply: “a broad array of presumably ‘environmental’ experiences—such as negative life events, relationships with parents, and experiences with peers—are themselves heritable” (Tucker-Drob, Briley, and Harden, 2013; 2). Clearly a broad range of experiences and qualities can be inherited. While the authors propose this to support the heritability of cognitive ability, it just as easily supports the heritability of academic inclination, curiosity, and even academic performance, factors which are often measured as indicators of cognitive ability. In fact, these very examples of heritable experiences are known to influence cognition themselves, as are other factors in the same category, such as parental attachment (Fryers & Brugha, 2013; 9). This gets to the core of the authors’ proposed model: that gene-environment correlation acts in concert with gene expression regulation to promote the achievement of cognitive ability potential in individuals with a genetic disposition for it. In particular, the authors posit that the natural outcome of this process is as observed: individuals with high cognitive ability are likely to have inherited it. In fact, their evidence shows that, rather than cognitive ability itself being heritable, the indicators often measured for cognitive ability are heritable, as are many influences on the development of cognitive ability.

Further complicating the authors’ model is their poor handling of shortcomings in their base evidence, twin studies. Due to restrictions on data retention and access, international adoption, research funding, and scope, the majority of twin studies showing heritability of cognitive ability sample from within the same country, with the same social norms, educational policy, and even similar socioeconomic context. The authors, however, insist that socioeconomic status is a predictor of both the heritability and the level of cognitive ability. This is faulty: if socioeconomic status predicts cognitive ability and socioeconomic status is itself significantly heritable, then it will appear that socioeconomic status influences the heritability of cognitive ability. This is demonstrated in the cited data, which break down outside of the US, particularly in social democracies such as Sweden (and Norway) where, the authors admit, strong social programs level the playing field and give individuals of diverse backgrounds access to the same pool of potential environmental stimuli to select from. In these states, the socioeconomic factors that typically have a deleterious effect on educational access and achievement (and thus on cognitive ability) are substantially reduced. As a result, the heritability of cognitive ability is low. This indicates that the authors’ model of positive experience selection for cognitive ability is fundamentally flawed: a socioeconomically- and gene-environment-linked deficit model better explains inheritance of cognitive ability. Instead of cognitive ability itself being inherited, it is a positive environment, with access to stimulating and diverse education and other experiences (among many factors), which is inherited. It is the absence of this environment and of these opportunities for experience which reduces an individual’s achieved cognitive ability. This is clearly shown in the countries where their model does not hold, which are precisely those with strong social programs.

The trouble with the authors’ model is not that it is based on a flawed mechanism, but that its predictions do not hold. There is no reason that high heritability of cognitive ability should correlate with high cognitive ability itself, absent the other factors. Their model provides no explanation for this claim, yet it is a clear component of their position. The authors themselves have set up evidence and arguments which can be used to make a completely different point: cognitive ability shows high heritability because environmental factors that influence it—such as social norms around education, parenting style, household stability, and academic aptitude and drive—are themselves heritable.

Although the authors identify and describe a compelling mechanism for the influence of environment on cognitive ability, they seem to mistake the causal direction of its operation in their interpretation. Rather than an abundance of stimulating experiences and environments improving an individual’s actualisation of their inherited potential, as is implied, a more parsimonious explanation is that a shortage of these influences suppresses achievement of genetic potential. Their base assumption is made clear on p. 3: “The ‘end state’ of this transactional process—high levels of and high heritability of cognitive ability—is therefore expected to differ depending on the quality and availability of environmental experiences.” Emphasis here is on the end state, particularly the high levels of cognitive ability. In other words, those who demonstrate higher heritability of cognitive ability also demonstrate higher levels of that ability; high cognitive ability is more closely linked to parental ability than low cognitive ability is. If this end state were high heritability of cognitive ability alone, then their argument would hold. However, the association of “high levels of” cognitive ability with this outcome does not follow from their model, and indicates that the observations could be better explained as a default outcome; Ockham’s Razor is well applied. This critical flaw in their proposal is laid bare in the only counter-argument they address: that their model breaks down in countries with stronger education systems and social programs. Rather than individuals’ ability to self-select unique experiences reinforcing a pattern of gene expression and experience selection that cyclically maximises expression of inherited cognitive ability, it is a much simpler explanation that a paucity of these experiences suppresses the expression of inherited cognitive ability.

References

Fryers, T., & Brugha, T. (2013). Childhood Determinants of Adult Psychiatric Disorder. Clinical Practice and Epidemiology in Mental Health: CP & EMH, 9, 1–50. http://doi.org/10.2174/1745017901309010001

Tucker-Drob, E. M., Briley, D. A., & Harden, K. P. (2013). Genetic and Environmental Influences on Cognition Across Development and Context. Current Directions in Psychological Science, 22(5), 349–355. http://doi.org/10.1177/0963721413485087

A Mac Pro Proposal

Since the Future of the Mac Pro roundtable two weeks ago, there’s been a lot of chatter in the Pro user community analyzing the whole situation. Marco Arment has a pretty good overview of people’s reactions in which he makes a strong case for the value of the Mac Pro as a top-of-the-lineup catchall for user needs that just can’t be met by any other Apple hardware offering. In general, Pro users seem fairly optimistic. This is a rare case of Apple engaging with its most vocal users outside of the formal product release and support cycle, and people seem to recognize its value.

However, although many commenters have variously strong opinions about their own needs (and speaking from experience is critical to help Apple understand the diverse use cases of people who buy Mac Pros), there hasn’t been a lot published on how exactly Apple could address these user needs in real products. Drawing on the specific and general demands of various Pro users, and pulling together syntheses breaking down which main categories of users have which needs, I have a proposal for the new Mac Pro’s technical features and product lineup. (And it’s not a concept design rendering; I’ll leave the industrial design up to Apple.) Fair warning: this is a moderately lengthy and technical discussion.

The most important thing to understand about Pro users is, as Marco Arment explains, their diversity. Pro users need more than any other device in the consumer Mac lineup can offer, but what exactly they need more of varies greatly. Since it’s such a good breakdown, I’ll quote him:

  • Video creators need as many CPU cores as possible, one or two very fast GPUs with support for cutting-edge video output resolutions (like 8K today), PCIe capture, massive amounts of storage, and the most external peripheral bandwidth possible.
  • Audio creators need fast single-core CPU performance, low-latency PCIe/Thunderbolt interfaces, rock-solid USB buses, ECC RAM for stability, and reliable silence regardless of load. (Many also use the optical audio inputs and outputs, and would appreciate the return of the line-in jack.)
  • Photographers need tons of CPU cores, tons of storage, a lot of RAM, and the biggest and best single displays.
  • Software developers, which Federighi called out in the briefing this month as possibly the largest part of Apple’s “pro” audience, need tons of CPU cores, the fastest storage possible, tons of RAM, tons of USB ports, and multiple big displays, but hardly any GPU power — unless they’re developing games or VR, in which case, they need the most GPU power possible.
  • Mac gamers need a high-speed/low-core-count CPU, the best single gaming GPU possible, and VR hardware support.
  • Budget-conscious PC builders need as many PC-standard components and interfaces as possible to maximize potential for upgrades, repairs, and expansion down the road.
  • And more, and more

I translated this to a table for clarity. First, a caveat: “software developers” refers to developers of general consumer software. Software developers who work in the other listed fields have all of the same requirements as that field in addition to the general requirements: a game developer needs a good gaming GPU, a video tools developer needs lots of cores and dual GPUs, etc.:

Pro Type    | CPU              | GPU             | RAM     | Expansion
Video       | High Core Count  | Dual GPU        | Max     | Lots
Audio       | Fast Single Core | Low             | Max ECC | Some
Photography | High Core Count  | Low             | Lots    | Little
Software    | High Core Count  | Low             | Lots    | None
Games       | Fast Single Core | Fast Single GPU | Lots    | None

This is a lot clearer, and we can see some trends. For CPU requirements, Pros generally either need fast single core performance or good multi-core performance. For GPU requirements, Pros generally either need fast single GPU performance or good dual-GPU performance. Everyone needs a lot of RAM; some need ECC RAM. Some need a lot of expansion, and others need none.

The best divider for Pro users appears to be CPU needs: either fast single-core performance or a high core count. GPU needs are more variable, pretty much everyone needs a lot of RAM, and a few users need chassis expandability. The most demanding users overall are video editors, who need not only lots of CPU cores, dual GPUs, and huge amounts of RAM (128GB, if not more), but also a lot of internal hardware expansion for video interface/capture cards, audio interface/capture cards, and data communication cards (Fibre Channel). I say “internal expansion” because the reality for these users is that 1) their hardware is niche, expensive, and slow-moving, thus requiring PCIe form factor support; 2) Thunderbolt 3 and other protocol adoption is slow in the industry and not available natively, or just not workable in external enclosures, for many applications; and 3) having stacks of external devices and enclosures, on top of other specialized hardware, is unwieldy, expensive, and ugly.

There are some other requirements that most Pro commenters noted as well:

  • Thunderbolt 3 is great, but not everyone needs as many ports. A couple folks have noted a desire for 8 Thunderbolt 3 ports at full speed, but this takes up valuable bus bandwidth and eats into PCIe capacity, so 4x might be offered as well.
  • Even accepting Thunderbolt as the future, and having a bunch of USB-C/Thunderbolt 3 full compatibility ports, there are still many USB-A devices in the world. Some of these are legacy hardware that doesn’t play nice with USB adapters and hubs. So a couple USB-A legacy ports would be nice. Speaking of USB-C, all provided Thunderbolt ports should support full Thunderbolt 3/USB-C functionality.
  • Whisper-quiet operation is assumed. Besides audio applications, where it is a necessity, nobody likes having a jet take off near their workspace. The fastest single-GPU applications can accept a little fan noise, but it should be limited as much as possible. Whether this takes the form of more clever thermal management or liquid cooling makes no difference, as long as it doesn’t restrict the thermal footprint of the device. Thermal capacity restriction is one of the primary failings of the 2013 design, identified by Pro users and engineers, and acknowledged by Apple itself.
  • Nobody cares about having an optical drive (the 2013 design had this right), but if there is one it damn well better be an industry-standard component and support Blu-ray without janky third-party stuff. The inclusion of really excellent DVD decoding and playback software on the Mac was a big deal, and the lack of equivalent software for Blu-ray is making the format a massive PITA for professionals and consumers alike. Physical media may be dying, but it’s not dead yet. A nice solution here would be a refresh to the SuperDrive: update it to Thunderbolt 3/USB-C and make it read and write Blu-ray (or make two: one that reads and writes CD/DVD and one that also reads and writes Blu-ray).
  • Likewise, space for 3.5″ spinning platter hard drives is not important. The 2013 design made the right call on this: they’re too slow for Pro use, and there’s no need to have large/archival storage capacity within the device as long as there’s enough SSD scratch space for active projects.
  • The audio output combo jack (optical + 3.5mm TRS) is brilliant and convenient. However, many users valued the dedicated audio input jack and would like it back. Make it also a combo jack with optical and 3.5mm TRS, and support line and mic level input. This doesn’t have to be a balanced XLR connector with phantom power, just a mic/line/optical combo jack.
  • Finally, and perhaps most critically: all BTO components should be industry standard and third-party compatible. This means processor(s), GPU(s), RAM, SSDs, and standard PCIe cards. Being stuck with whatever BTO configuration they initially order is a huge, deal-breaking inconvenience for many self-employed Pro users. It’s insulting to their knowledge of their own unique product needs and comes off as absurdly greedy. Not only is the Mac market a small fraction of Apple’s revenue, the top-end Pro segment is absolutely minuscule. Nickel-and-diming Pro users with exorbitant rates for high-end BTO configurations, Apple-certified-exclusive hardware upgrades, and incompatible-parts lock-in is nothing less than stupid and reads as incredibly arrogant. Pros are willing to pay a premium for premium components, but not for huge markups on things they won’t even use in their industry (like audio pros saddled with useless dual GPUs).
  • Besides the consumer-facing arguments for using standard components, there’s also a strong technical argument to be made: a big part of the 2013 Mac Pro’s stagnation has been a lack of consistent updates from Apple, and the complete inability for third parties to fill this void. Using industry standard components makes it easier for Apple to offer consistent product updates to the latest industry offerings (requiring less R&D on building custom components), and for consumers to upgrade their existing devices as the technology advances. This is to everyone’s benefit.

Finally, I’m not here to design enclosures, only to outline purely technical requirements. How Apple chooses to package any such device(s) I am deliberately leaving open ended. I think that the 2013 Mac Pro enclosure redesign was brilliant, aesthetically and functionally. It failed not in the eyes of the Pro community because of its appearance, size, or clever design, but in their workflows and environments because it did not meet their diverse technical needs. Any new device does not have to be a return to the iconic “cheese grater” tower, but it needs to address the technical needs that I identified above.

Perhaps most of all, it needs to make products like Sonnet’s xMac Pro Server enclosure unnecessary for non-enterprise users. While such a product is fine for the datacenter and for server applications (I’m not going to go into the demand for proper Mac server hardware here), the fact that a $1,500 enclosure is the most convenient way to get back functionality that came standard in the previous-generation device is obscene. I’m referring, of course, to user-upgradeable (and expandable) storage and PCIe card support. Even at that price it is inadequate for GPUs, since it only offers PCIe 2.0 slots. A rack form factor is not appropriate for a significant segment of Pro users, and requiring any unwieldy external enclosure for hardware expansion is ridiculous in the face of the entire rest of the desktop computer hardware market.

With these considerations in mind, I’ve come up with a model lineup that I believe provides the best balance between flexibility/capacity and reasonable excess: meeting the highly variable needs of the Pro market segment without making everyone buy the same super-in-every-way machine. I propose two motherboard-distinct configurations, which may or may not ship in the same enclosure; I don’t care, and it doesn’t matter to Pro users as long as their technical needs are met. Without further ado:

Configuration 1: Emphasis on Audio, Games, & Software

  • Single socket latest generation Intel processor. Configurable 4 to 8 cores.
  • 4x DDR4 ECC RAM slots. Configurable 16GB up to 512GB. Make the base configuration 1x16GB so users don’t have to toss their factory RAM to upgrade a little.
  • 2x PCIe 3.0 x16 dual-width standard slots. Configurable with choice of graphics card(s) (None, Single, Dual). Maybe one or two x8 slots as well.
  • 4x Thunderbolt 3/USB-C ports.

Configuration 2: Emphasis on Video, Photography, & Software

  • Dual socket latest generation Intel processors. Configurable with one or two sockets used, 4 (single socket) to 16 cores (dual socket).
  • 8x DDR4 ECC RAM slots. Configurable 16GB up to 1024GB. 1x16GB min. configuration, again.
  • 4x PCIe 3.0 x16 dual width standard slots. Configurable with choice of graphics card(s) (None, Single, Dual). Maybe two or four x8 slots as well.
  • 8x Thunderbolt 3/USB-C ports.

Offering two distinct motherboards with two distinct levels of capability provides the kind of configuration flexibility that has been identified as crucial for the Pro device market. Audio editors aren’t locked into a dual-socket, dual-GPU device that they won’t be able to take advantage of; video editors can get the dual-GPU capabilities they need; and VR and games developers can get the high-TDP single GPUs that are typical in their industries.

However, this is where the distinctions end. Offering too many differences between models is both too much R&D work, and also a recipe for crowding users into a configuration that will have either more of what they don’t need or less of what they do. I therefore propose that the devices be feature equivalent in the following ways:

  • Dual 10GBASE-T Ethernet. Enough professionals work in data-heavy environments that bandwidth for storage interfaces is a big issue. Dual-bonded 10GbE offers 20 gigabits of throughput for a lot less cost than Fibre Channel, and offering it built in frees up a PCIe slot for Pros who would otherwise use Fibre Channel (or bonded 10GbE). Even at a single 10Gb there are workflows in video, audio, and photo editing which are not possible with 1Gb. Users are becoming familiar with the speed of SSDs, and Pros need that speed in their networking too. Heck, USB 3.1 is 10Gb signaling. Many will only ever use one port, but providing two is the right choice.
  • Bluetooth 4.2/4.1/BLE/4.0/3.0/2.1+EDR and 802.11ac WiFi. No comment.
  • 4x M.2 PCIe NVMe SSD slots. Of course the slots could be used for other things, but the primary purpose is to access industry-standard SSDs, which the user can upgrade and expand. Although enterprise customers can stand shelling out big bucks for BTO Apple SSDs, nobody can stand having them soldered to the board or otherwise made proprietary. Four slots should be sufficient; Pros with bigger data needs than that often have, or would benefit from, external RAID and NAS (especially with those 10GbE ports spec’d above). The working storage within a Pro device should be the fastest available, and of sufficient capacity to allow video, audio, and other asset-heavy Pros to use the fastest local storage for their project workspace. Larger-capacity data storage (>4TB) is a niche market, and a strong argument can be made that these users’ needs for archival and reference storage are better met with outboard solutions, which are becoming financially and technically accessible. This is one of the driving justifications for including dual 10GbE: to allow the fastest access to economical network-attached storage. These slots need to support RAID for drive redundancy and speed, including on boot drives. Folks also mention the value of having them for Time Machine. 8TB total capacity has been mentioned, and seems like a reasonable (if still quite expensive) upper bound. So the idea is that you get fast boot for the OS and a good chunk of fast scratch space if you’re working with large assets. Being able to keep a complete local copy of large Premiere projects (or at least sufficiently large chunks of them) is, I’ve heard, invaluable.
  • 2x HDMI 2.1. HDMI 2.1 is shaping up as the future of high-resolution display connection, and having HDMI for compatibility is necessary anyway. It should, of course, support multichannel audio out via HDMI. I also understand that an Apple-branded display is returning. Many Pros benefit from having dual displays, and offering an Apple-branded display with HDMI connectivity would expand the market for such a device outside of the Mac sphere, and would also free up TB3 ports for other bandwidth-intensive peripherals. Apple displays are recognized among video and photography Pros as some of the best on the market, and are still widely used even by non-Mac users. It seems reasonable to offer them in both an unadorned base HDMI configuration and a slightly more expensive TB3 (with backside USB hub) version.
  • 2x USB-A 3.1. Put one of them somewhere easily accessible (for USB flash drives and convenience peripherals). Put one on the back for legacy devices, for the handful of users who need them. USB-C/Thunderbolt 3 is the future of peripherals; Pros understand this, and adapters from USB-C to USB-A are cheap and not too inconvenient, so there’s no need to weigh down the device with more than a couple of USB-A legacy ports.
  • Dedicated audio in/audio out optical/analog combo jacks, with line/mic support. Many people have lamented the loss of the dedicated line-in jack. Making it an optical and line/mic combo would be fantastic. For day-to-day use, put a headset combo jack on the front panel (next to the front USB-A port): plugging headphones into the back of a device sucks, as does plugging in a USB flash drive.
  • SD card combo reader. This is a convenience feature for photography and videography professionals. It’s a concession to their trade, and should be placed conveniently on the front of the device if included. However, I understand if it’s not included.
  • RAM should be ECC. This is the Pro market, and enough folks would benefit to make it standard.
  • Also, RAM should be DDR4. There’s a few-percent latency penalty, but the significant energy (aka heat) savings, access to higher-capacity DIMMs, and modern processor support make it time for the switch. Although theoretically possible, there is no consumer DDR3 module on the market sporting >16 GiB capacity, and no consumer motherboard ever manufactured that could utilize one. There are, however, DDR4 DIMMs as large as 128GB on the market right now. They cost an arm and a leg, but they exist. Producing computers capable of utilizing these modules would increase demand, thus increasing their availability and decreasing their price.
  • PCIe card slots should support all industry-standard devices, including GPUs. The devices’ provided GPUs should also use these slots. No weird form factors, inaccessible mounting locations, or failure to support double-wide cards. The number of slots a user anticipates needing will inform their choice of build; since GPU needs are so highly variable, it is reasonable to offer variety.

This provides two primary configurations, which have distinct motherboards and distinct capacities. They’re similar enough to be housed in the same enclosure, or distinct enough to warrant different designs. Like I said, I’m not a designer, and whether the device meets the demands of Pro users comes down to whether it supports the technical features they need. It doesn’t matter if the PCIe slots are vertical, horizontal, sideways or backwards, or even if they’re mounted on opposite sides of the enclosure as long as they meet the technical and functional needs outlined above. Likewise for the RAM and all other components.

This approach was partly inspired by a comment from a moderator over at tonymacx86, who suggested Apple release a mini-ITX form factor device. It would be great for the Mac gaming community and many enthusiasts/independent pros to have an even smaller, cheaper “Pro” desktop. Here’s how such a device might look:

  • Single socket Intel processor. Configurable 2 or 4 cores.
  • 2x DDR4 ECC RAM slots. Configurable 16GB to 256GB. Make the base configuration 1x16GB so users don’t have to toss their factory RAM to upgrade a bit.
  • 1x PCIe 3.0 x16 dual-width slot. Configurable with choice of graphics card (None, Single). Maybe one x8 slot as well (or dual-purpose an M.2 slot).
  • 2x Thunderbolt 3/USB-C ports.

I imagine this device might sport dual 1GbE instead of dual 10GbE to lower the price point, and 2x M.2 slots instead of 4x. The latest generation of Gigabyte’s gaming-oriented mini-ITX motherboards boasts many of these features.

However, the needs of the middle and high end of the Pro market would not be met by such a device, and offering only it and the higher-spec configuration outlined above would leave too large a gap between their capabilities. Obviously the higher-spec device is a must, because if it doesn’t exist those Pros are jumping ship for custom-built PCs. So I think development focus should go to the two builds described above; this third configuration/model is more of an “I wish” product.

Here’s a feature comparison matrix, referring to the three devices by their most-similar PC form-factor names.

Feature                       | mini-ITX           | Mid Tower                | Full Tower
PCIe 3.0 x16 dual-width slots | 1                  | 2                        | 4
GPUs                          | Integrated, Single | Integrated, Single, Dual | Integrated, Single, Dual
CPU sockets                   | 1                  | 1                        | 2
CPU cores (max)               | 6                  | 8                        | 16
RAM slots                     | 2                  | 4                        | 8
Max RAM                       | 256GB              | 512GB                    | 1024GB
Thunderbolt 3/USB-C ports     | 2                  | 4                        | 8
PCIe M.2 slots                | 2                  | 4                        | 4
LAN                           | Dual 1GbE          | Dual 10GbE               | Dual 10GbE
HDMI 2.1 ports                | 1                  | 2                        | 2

Comments are open. If you’re a Pro Mac user, please let me know which configuration you would choose, what BTO options you would prefer, and what third-party hardware and peripherals you would connect. I’d like to get more direct requirements from actual users so I can better refine this proposal to meet everyone’s needs.

Crossflash IBM M1015 to LSI 9220-8i IT Mode for FreeNAS

The IBM M1015 is a widely available LSI SAS2008-based RAID controller card. It is an extremely popular choice for home and enthusiast server builders, especially among FreeNAS users, for its low price point (~$60 US secondhand on eBay) and excellent performance.

In essence, it’s hardware equivalent to the LSI 9211-8i; officially, it’s the 9220-8i, sold to OEMs to be rebadged. Two SFF-8087 mini-SAS quad-channel SAS2/SATA3 ports, no cache, no battery backup. Cross-flash it to LSI generic firmware in IT mode, they say, and you get an excellent SATA III HBA on the cheap. Turns out that’s easier said than done, especially if you’re working with a recent consumer motherboard.

The comprehensive, only slightly dated, instructions are here. Ironically, I only found them after I had pieced together the procedure for myself.

At this point, FreeNAS 9.10 is compatible with version P20 firmware. User Spearfoot on the FreeNAS forums has a package containing the utilities and firmware files. I’ve also attached it to this post: m1015.
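
For reference, the flashing sequence looks roughly like the following. The filenames (sbrempty.bin, 2118it.bin, mptsas2.rom) follow the commonly used guides; substitute whichever names your copy of the package uses, replace the SAS address placeholder with the address printed on your card’s sticker, and verify every flag against the included documentation before running anything:

# From DOS (megarec will not run under EFI): wipe the IBM SBR with an empty image, then erase the flash
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

# Reboot into the EFI shell, then flash the LSI P20 IT-mode firmware (omit -b mptsas2.rom if you do not want the boot ROM)
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom

# Restore the SAS address from the sticker on the card
sas2flash.efi -o -sasadd 500605bxxxxxxxxx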

Notes:

  • If your motherboard lacks an easily accessible EFI shell, use the one in rEFInd.
  • If you get the error “application not started from shell”, that’s an EFI shell version compatibility issue. Use the shell provided in the link.
  • “No LSI SAS adapters found!” from sas2flsh.exe likely indicates that IBM firmware is still present. Use megarec to erase it.
  • “ERROR: Failed to initialize PAL. Exiting Program.” means your motherboard is not compatible with the DOS sas2flsh. Use the EFI version.

Additional References:

  • GeekGoneOld on the FreeNAS forums has a quick guide: #18
  • And a useful reply in that same thread: #28
  • Redditor /u/PhyxsiusPrime describes the EFI shell compatibility workaround via rEFInd here.

Automatically Update Your Copyright Notice

Who wants to manually update the copyright date on their active blog? Nobody. Even though it’s best practice to keep your copyright date current with the last revision of a website, this can be difficult with an active blog.

Worse, it only has to be done once a year, which makes it easy to put off. If your blog is active and you’re paying attention, there’s at least a few days’ delay while you recover from the New Year before you bother updating the copyright date.

Everything else on the website is generated dynamically, so why not the copyright date? Sure, we could just have it display the current year, and that’s fine as long as you update the site at least once every year, but it’s not guaranteed. Besides, there’s an even cooler way, at least if you’re using WordPress. It involves the descriptively named get_lastpostmodified() function and PHP >=5.3’s DateTime class:

<small>©<?php $datetime = new DateTime(get_lastpostmodified());
         echo $datetime->format('Y'); ?> [Name]</small>

First, we grab the timestamp of the most recent update to any post. Then we feed that into a new DateTime object. Finally, we print out just the year.

Unfortunately, get_lastpostmodified() does not support UTC, only GMT; otherwise I’d use that. Annoyingly, the WordPress devs seem to consider these two time zones interchangeable. For our purposes, GMT and UTC are close enough, and the resultant code is quite similar:

<small>©<?php $datetime = new DateTime(get_lastpostmodified('gmt'), new DateTimeZone('gmt'));
         echo $datetime->format('Y'); ?> [Name]</small>

And there you go, a dynamically updating copyright notice suitable for your blog footer, that will keep the date current with the date of the last update to your blog.

WordPress Custom Taxonomy Archive for Custom Post Types

Part of a series of posts on advanced WordPress customization, this article covers the implementation of a WordPress Custom Taxonomy archive for Custom Post Types.

Desired Effect

What we want in the end are URLs that look like example.com/post-type/taxonomy-name/taxonomy-term leading to a listing of Custom Post Type objects. It’s like example.com/blog/tag/tutorial, but with Custom Post Type and Custom Taxonomy instead of the built-in “post” post type and “tag” taxonomy. Posts should reside at example.com/post-type/%postname%/.

Approaches

There are a couple of approaches to implementing this functionality in WordPress. I’ve outlined the two potential approaches below, then explain my solution.

Custom Permastruct, Manual Custom Query

One could use a Custom Field on the Custom Post Type to store the taxonomy name and term as a key-value pair. One could then create a new custom rewrite tag via add_rewrite_tag() to allow rewriting of taxonomy-name/taxonomy-term to a GET query variable. Finally, one could write a plugin and/or customize their theme to extract the newly added query variable and use it to construct a custom SQL query or WP_Query() object. This would hijack The Loop, causing it to return only the appropriate posts.
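
As a rough sketch of what that might look like (the rewrite pattern, query variable name, and meta key below are purely illustrative, not taken from any real implementation):

// Hypothetical sketch of the manual approach; all names are illustrative.
add_action( 'init', function () {
    // Registers the rewrite tag and its matching query variable.
    add_rewrite_tag( '%example_term%', '([^&/]+)' );
    // Map the pretty URL onto the post type plus our query variable.
    add_rewrite_rule(
        '^example-custom-posts/example-taxonomy/([^/]+)/?$',
        'index.php?post_type=example-custom-post&example_term=$matches[1]',
        'top'
    );
} );

// Constrain the main query rather than hand-rolling a WP_Query in the template.
add_action( 'pre_get_posts', function ( $query ) {
    $term = $query->get( 'example_term' );
    if ( ! is_admin() && $query->is_main_query() && $term ) {
        // In the Custom Field variant, the term is stored in post meta under this key.
        $query->set( 'meta_key', 'example-taxonomy' );
        $query->set( 'meta_value', $term );
    }
} );

As with any rewrite changes, the rules need to be flushed (re-save Permalinks) before the new URLs resolve.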

Drawbacks, Other Notes: This approach relies on a Custom Field on each Custom Post having the proper name and value, which means that if something is wrong there, the post won’t show up properly. There is no unified system to view all the values of that Custom Field across all posts, and no way to automatically rename them if the need arose. One could use the Advanced Custom Fields plugin to ameliorate some of these issues, but it’s not ideal. Furthermore, this system requires heavy customization of the theme, which makes it much less portable: not only the custom query parsing and execution, but also a custom generator for the taxonomy term links and the like.

Hijack Existing Functionality in register_post_type() and register_taxonomy()

register_post_type() and register_taxonomy() already make calls to add_permastruct(), add_rewrite_tag(), and add_rewrite_rule(), so why can’t we hijack those to achieve the desired result? It turns out that you can, but it’s not as straightforward as you might expect. Details below.

Drawbacks, Other Notes: This approach utilizes the taxonomy system, and so benefits from all of the functionality and UI already written for it. Furthermore, it does not require such heavy customization of the theme, making it more portable. So far, the only drawback is that I haven’t figured out how to support multiple taxonomies.

Solution

It took a lot of digging, but I finally located most of the solution on the WordPress StackExchange. It relies on a little bit of under-documented functionality in register_post_type() and register_taxonomy() to hijack the behind-the-scenes calls to add_permastruct() and friends and achieve the desired outcome. I documented this solution on the WordPress support forums. The first step is to register the custom taxonomy with a few special modifications to the $args, particularly rewrite:

// Register Custom Taxonomy
function example_taxonomy_init()  {
    // All Labels as usual
    $labels = array(
        'name'                       => 'Example Taxonomy',
        'singular_name'              => 'Example Taxonomy',
        'menu_name'                  => 'Example Taxonomy',
        'all_items'                  => 'All Example Taxonomies',
        'parent_item'                => 'Parent Example Taxonomy',
        'parent_item_colon'          => 'Parent Example Taxonomy:',
        'new_item_name'              => 'New Example Taxonomy Name',
        'add_new_item'               => 'Add New Example Taxonomy',
        'edit_item'                  => 'Edit Example Taxonomy',
        'update_item'                => 'Update Example Taxonomy',
        'separate_items_with_commas' => 'Separate example taxonomies with commas',
        'search_items'               => 'Search example taxonomies',
        'add_or_remove_items'        => 'Add or remove example taxonomies',
        'choose_from_most_used'      => 'Choose from the most used example taxonomies',
    );
    // IMPORTANT!
    $rewrite = array(
        'slug' => 'example-custom-post/example-taxonomy', // this kicks add_permastruct
        'with_front' => false, // we don't want this appearing under /blog/ or something
    );
    $args = array(
        'labels'                     => $labels,
        'hierarchical'               => true, // behave like categories, not tags; I have not tested non-hierarchical (tag-style) archives, but they should work
        'public'                     => true,
        'show_ui'                    => true,
        'show_admin_column'          => true,
        'show_in_nav_menus'          => true,
        'show_tagcloud'              => true,
        'rewrite'                    => $rewrite,
    );
    register_taxonomy( 'example-taxonomy', 'example-custom-post', $args ); // we only want this taxonomy to link to our custom post type, nothing else
}
// Hook into the 'init' action
add_action( 'init', 'example_taxonomy_init', 0 );

What this does is hijack the call to add_permastruct() on line 375 of taxonomy.php in the register_taxonomy() definition, from which WordPress generates the taxonomy archive page. We also hijack add_rewrite_tag() on line 374, which creates the rewrite tag that we reference next. slug must be in the form <custom-post-type-name>/<custom-taxonomy-name>, where <custom-post-type-name> is the value of the $post_type parameter when you call register_post_type() and <custom-taxonomy-name> is the value of the $taxonomy parameter in register_taxonomy(). This is the call that actually creates the custom taxonomy archive. Then we register our custom post type, with a few modifications to $args, particularly to rewrite and has_archive:

// Register Custom Post Type
function example_custom_post_init() {
    // All Labels as usual
    $labels = array(
        'name'                => 'Example Custom Posts',
        'singular_name'       => 'Example Custom Post',
        'menu_name'           => 'Example Custom Posts',
        'parent_item_colon'   => 'Example Custom Post:',
        'all_items'           => 'All Example Custom Posts',
        'view_item'           => 'View Example Custom Post',
        'add_new_item'        => 'Add New Example Custom Post',
        'add_new'             => 'New Example Custom Post',
        'edit_item'           => 'Edit Example Custom Post',
        'update_item'         => 'Update Example Custom Post',
        'search_items'        => 'Search example custom posts',
        'not_found'           => 'No example custom posts found',
        'not_found_in_trash'  => 'No example custom posts found in Trash'
    );
    // IMPORTANT!
    $rewrite = array(
        'slug'                => 'example-custom-posts/%example-taxonomy%', // keep the % characters
        'with_front'          => false,
    );
    $args = array(
        'label'               => 'example-custom-post',
        'description'         => 'An example custom post type',
        'labels'              => $labels,
        'supports'            => array( 'title', 'editor', 'excerpt', 'thumbnail', ),
        'taxonomies'          => array( 'example-taxonomy' ), // link to our custom taxonomy (must match the taxonomy name registered above)
        'hierarchical'        => false, // behave like posts, not pages; can be either, but I have not tested hierarchical post type archives
        'public'              => true,
        'show_ui'             => true,
        'show_in_menu'        => true,
        'show_in_nav_menus'   => true,
        'show_in_admin_bar'   => true,
        'menu_position'       => 5,
        'menu_icon'           => '',
        'can_export'          => true,
        'has_archive'         => 'example-custom-posts', // IMPORTANT!
        'exclude_from_search' => false,
        'publicly_queryable'  => true,
        'capability_type'     => 'page',
        'rewrite'             => $rewrite
    );
    register_post_type( 'example-custom-post', $args );
}
// Hook into the 'init' action
add_action( 'init', 'example_custom_post_init', 0 );

Since we’ve already tied the custom post type and custom taxonomy together in their initialization, we don’t have to call register_taxonomy_for_object_type(). Modifying $rewrite['slug'] allows us to hijack the call to add_rewrite_rule() on line 1309 of post.php with %example-taxonomy%, which is the name of the rewrite tag we created in the call to register_taxonomy() above; this rewrites example.com/example-custom-posts/example-taxonomy/term/post-name to example.com/example-custom-posts/post-name, though we could change that. Importantly, we must set has_archive to something other than true; this creates the regular custom post type archive index (see here). Last, we hijack add_permastruct() again to create the regular index for the custom post type. It took me a while to figure out exactly why and how this worked, and I hope that by documenting it here others may benefit.
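
One practical note: WordPress caches rewrite rules, so after adding or changing these registrations the rules have to be flushed once before the new URLs resolve; re-saving Settings → Permalinks in wp-admin is enough. If the registration lives in a plugin, a minimal sketch of the usual activation-hook pattern (assuming the two init functions above) would be:

// Flush rewrite rules once on plugin activation so the new permastructs take effect.
// Flushing on every request is expensive; only flush when the rules actually change.
function example_rewrite_flush() {
    example_taxonomy_init();
    example_custom_post_init();
    flush_rewrite_rules();
}
register_activation_hook( __FILE__, 'example_rewrite_flush' );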

“I don’t see Apple dramatically improving or replacing Objective C anytime soon.”

Jason Brennan, in February of 2014:

I don’t see Apple dramatically improving or replacing Objective C anytime soon.

Five months later:

Apple introduced a boatload of new consumer features for OS X and iOS today, but one of the biggest announcements for developers could be its new programming language, Swift. Craig Federighi just announced it, saying that Apple is trying to build a language that doesn’t have the “baggage” of Objective-C, a programming language that came from NeXT that has formed the basis of OS X and eventually iOS.

I’ll have this in a bread bowl.

This new “Apple SIM” could legitimately disrupt the wireless industry

Perhaps the most interesting news about Apple’s new iPad Air 2 tablet is buried at the bottom of one of its marketing pages: It will come pre-installed with a new “Apple SIM” card instead of one from a specific mobile operator.

If anything can disrupt the carrier marketplace, this is it.

It’s early, but it’s easy to see how this concept could significantly disrupt the mobile industry if Apple brings it to the iPhone. In many markets—especially the US—most mobile phones are distributed by operators and locked to those networks under multi-year contracts. People rarely switch operators, partially out of habit and satisfaction, but mostly because it’s annoying to do so.

Carrier lock-in is the bane of market competition. The ability to easily switch carriers and preserve your cellular identity, keeping not only your device but your existing number, without anything more than that device itself—this is the holy grail for carrier customers.

Via @theloop.

Excellent Sheep

I recently had the opportunity to attend a talk by William Deresiewicz, author of the recently published Excellent Sheep: The Miseducation of the American Elite and the Way to a Meaningful Life. This piece was prompted by his comments on the direction and decisions of today’s young people as they approach their majority and contemplate higher education.

If excellence is the goal, “Education” is the obstacle.

To the capable and the driven, who today account for a great number of the sons and daughters of Middle Class America, an elite education is the stairway to success. It is, in fact, the essence of the Middle Class Dream. While the American Dream is the broad belief that success can be achieved through hard work and perseverance, the Middle Class Dream applies that belief to the following premise: if the most successful people come from elite universities, then the best path to success is through those universities. It is the belief that an elite education is the road down which hard work travels to become success. Unfortunately, the numbers back this belief: according to the  U.S. Department of Education’s National Center for Education Statistics, higher education correlates strongly with higher income. The Center writes, “In 2012, young adults with a bachelor’s degree earned more than twice as much as those without a high school credential…”

A pragmatic parent’s response is obvious: send your kids to college to insure their future. Send them to the best college they can get into, because the bigger the name on their résumé, the better their prospects are in the job market. College is an investment—insurance against economic uncertainty—and, as in finance, the bigger the initial investment, the larger the return. The goal is no longer learning, the goal is achievement and the appearance of educational excellence. Education has become a system and, like any system, the most successful are those within it who play the game to their advantage. As Deresiewicz writes in his essay “Ivy League Schools are Overrated”, elite education is structurally opposed to intellectual curiosity: “Students are regarded by the institution as “customers,” people to be pandered to instead of challenged. Professors are rewarded for research, so they want to spend as little time on their classes as they can. The profession’s whole incentive structure is biased against teaching, and the more prestigious the school, the stronger the bias is likely to be. The result is higher marks for shoddier work.” Students avoid risk like the plague, meeting the requirements of courses but without the freedom to explore the content or develop a true understanding of it. Deresiewicz continues, “Once, a student at Pomona told me that she’d love to have a chance to think about the things she’s studying, only she doesn’t have the time.” This extends down into the admissions process and from there into high schools. Getting into an elite college is an act of perseverance: forego free time, abandon excellence, and throw yourself bodily into the process of gaming the higher education system and filling every box on the admissions offices’ checklists. I know this because some of my friends did it: they played the education system and were admitted to Harvard, MIT, Stanford, Princeton, and Pomona (among others; whether or not they enrolled there is another story).

Society believes that education produces excellent individuals. Excellent individuals are capable, principled, and interesting. Elite education, then, does not produce excellent individuals. It produces, to use Deresiewicz’s book title, “Excellent Sheep.” As he writes in the book, “Our system of elite education manufactures young people who are smart and talented and driven, yes, but also anxious, timid, and lost, with little intellectual curiosity and a stunted sense of purpose: trapped in a bubble of privilege, heading meekly in the same direction, great at what they’re doing but with no idea why they’re doing it.” People like this would only ever be called “interesting” by a psychologist studying unbalanced personalities.

What, then, does it mean to be interesting? We talk about well-educated people being interesting and engaging. They say and do interesting things. One of my father’s college professors once told him, “The most interesting people know the most interesting things.” Of course the most interesting people have the most interesting knowledge. But how do they get it? How do they become interesting? How do they achieve this most sought-after personality trait? Was it something they read? Was it something they heard? Was it the college they went to? People search after these answers, but never find them. I think it’s because they can’t be found. There is no book you can read to become interesting, no lecture you can attend. There is no college that can bestow it upon you at graduation, for no school can collect books that don’t exist and offer lectures that can never be heard.

If there is one thing I have learned in my meager time on this earth, it is that interesting is not a status you can achieve. Interesting is not a badge you can earn, a box you can check off, an award you can receive, nor a rank you can attain. Interesting is a state of being. It is a process, continuous; it is the very act of being interested. “The most interesting people know the most interesting things,” not because they learn them to become “interesting”, but because they study that which they find interesting. Being interesting is being intellectually curious, exactly what our elite universities prevent. And conveniently, the most interesting people (by this definition) are also some of the most successful in life. They are excellent individuals, not just excellent sheep.


If you’re on Medium, this story was also published there.