Smart TVs Enable Creepy Ads That Follow You

Since as early as 2013, the misleadingly-named, San Francisco-based Free Stream Media Corp. has touted smart TV software capable of detecting what you’re watching. Initially marketed as a social tool to drive viewer engagement, the software has morphed into an Orwellian advertising spy machine. Called “Samba TV” since its debut at CES in 2013, the software comes pre-installed on select Smart TV sets from a dozen manufacturers, including Sharp, Toshiba, Sony, and Philips. Claiming to provide consumers who opt in with “recommendations based on the content you love”, the software in fact monitors everything displayed on the TV to identify not only broadcast advertisements but also streaming services and even video games and internet videos.

This data is then distributed to advertisers in real time. The result: creepy targeted ads that know what you’re watching.

Christine DiLandro, a marketing director at Citi, joined Mr. Navin at an industry event at the end of 2015. In a video of the event, Ms. DiLandro described the ability to target people with digital ads after the company’s TV commercials aired as “a little magical.”

This accomplishment is a result of Samba’s “device map”, which appears to utilize a combination of local network exploration and mobile device fingerprinting to identify smartphones, tablets, and other computers in the same household as an enabled Smart TV. This allows the company to target ads to other devices based on what’s on TV.
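Samba’s actual device-mapping method isn’t public, but cross-device ad targeting is commonly implemented by bucketing devices that show up behind the same public IP address (i.e., the same household NAT). A minimal sketch of that generic technique, with all device IDs and log entries hypothetical:

```python
from collections import defaultdict

# Hypothetical ad-request log entries: (device_id, public_ip)
requests = [
    ("tv-123", "203.0.113.7"),
    ("phone-456", "203.0.113.7"),
    ("laptop-789", "203.0.113.7"),
    ("phone-999", "198.51.100.2"),
]

def build_device_map(requests):
    """Group device IDs by the public IP they share; devices behind
    one household NAT land in the same bucket."""
    households = defaultdict(set)
    for device_id, ip in requests:
        households[ip].add(device_id)
    return households

households = build_device_map(requests)
# The TV's household bucket now lists the phone and laptop to target.
print(sorted(households["203.0.113.7"]))
```

Once the TV reports what’s on screen, any other device in its bucket can be served a matching ad.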

Presumably they’re also building a profile of your viewing habits to sell to advertisers. Yikes.

US Cell Phone Carriers Sell Your Location, Without Permission

In May, the New York Times reported on a private company that purchased bulk user location data from US cellular carriers and then re-sold individual location data to law enforcement in a blatant violation of customer privacy and legal due process:

The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show.

US Sen. Ron Wyden (D-Ore.) took action the next day, calling on carriers to discontinue selling subscriber data to so-called “location aggregators”. So far AT&T, Verizon, Sprint, and T-Mobile have responded, issuing statements of intent to cut ties with location middlemen. Whether they will continue to share subscriber location data without explicit and affirmative consent remains to be seen. Congressional Republicans show no interest in preventing them:

“Chairman Pai’s total abandonment of his responsibility to protect Americans’ security shows that he can’t be trusted to oversee an investigation into the shady companies that he used to represent,” Wyden said. “If your location information falls into the wrong hands, you—or your children—can be vulnerable to predators, thieves, and a whole host of people who would use that knowledge to malicious ends.”

FCC Chairman Ajit Pai represented Securus in 2012. More information from Ars Technica, which reports that Congress blocked Obama-era regulations that would have prevented this kind of behavior.

Tapplock Is Basically Worthless

Recently-kickstarted Tapplock touts a Bluetooth-enabled smart lock that uses a fingerprint sensor. The company came under fire from tech-savvy commentators when popular YouTuber JerryRigEverything completely disassembled and defeated it in a matter of minutes using a screwdriver and an adhesive pad. This attack appears to be related to a quality control problem with the specific unit he used; a spring-loaded shear pin is supposed to prevent the back from rotating. It’s unclear whether that pin can be easily snapped or retracted, for example with a strong magnet, but it turns out that doesn’t matter. UK-based security researchers PenTestPartners:

The only thing we need to unlock the lock is to know the BLE MAC address. The BLE MAC address that is broadcast by the lock.

The security credentials used to control the lock are derived from the device’s publicly broadcast identifier. This means that every single lock is vulnerable to an attack that can be carried out with a smartphone app:

I scripted the attack up to scan for Tapplocks and unlock them. You can just walk up to any Tapplock and unlock it in under 2s. It requires no skill or knowledge to do this.
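PenTestPartners’ write-up describes the credentials as being derived from an MD5 hash of the broadcast MAC address. A minimal sketch of that class of flaw follows; the precise normalization and byte layout below are assumptions for illustration, not Tapplock’s exact scheme:

```python
import hashlib

def derive_unlock_credentials(ble_mac: str) -> tuple[str, str]:
    """Illustrate why deriving secrets from a broadcast identifier fails:
    anyone who can see the BLE MAC can recompute the same values.
    The MD5-of-MAC construction follows the PenTestPartners write-up;
    the key/serial byte split here is an illustrative assumption."""
    digest = hashlib.md5(ble_mac.upper().encode("ascii")).hexdigest().upper()
    key = digest[:8]       # first 8 hex chars treated as the "key"
    serial = digest[8:16]  # next 8 hex chars treated as the "serial"
    return key, serial

# An attacker reads the MAC from the lock's BLE advertisements and
# computes valid credentials with no secret knowledge at all.
key, serial = derive_unlock_credentials("AA:BB:CC:DD:EE:FF")
print(key, serial)
```

The lesson generalizes: any credential computable from a publicly broadcast value is not a credential at all.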

Can it get worse? Yes, it can. Responding to the researcher’s security disclosure, Tapplock reportedly said:

“Thanks for your note. We are well aware of these notes.”

Be wary of Internet of Things (IoT) “smart” security devices. They are neither smart nor secure.

Enable Searching of SMB Shares on FreeNAS under macOS

One frustrating shortcoming of accessing SMB shares from macOS is that content indexing for file search fails by default: the normal Finder “Search” field simply returns nothing. This makes it particularly tedious to work with large SMB shares when you don’t know exactly where the files you want are located.

The solution at the link is simple, if obscure: select the fruit object from the available VFS Objects under the Advanced configuration of the share in question. Thanks to Spiceworks user David_CSG for dropping a hint about vfs_fruit that led me to this solution.
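For reference, the Samba configuration that selection generates looks roughly like the stanza below (share name and path are placeholders). The relevant line is `vfs objects = fruit streams_xattr`; the `vfs_fruit` documentation recommends loading `streams_xattr` alongside `fruit`:

```ini
[media]
    path = /mnt/tank/media
    read only = no
    ; "fruit" enables the macOS (AAPL) SMB extensions that make
    ; Finder metadata and search behave; pair it with streams_xattr
    vfs objects = fruit streams_xattr
```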

FireWire Quibble

I have a personal quibble: FireWire may be a dead product, but there are a lot of legacy devices out there (mostly in the audio world). The current-generation Thunderbolt–FireWire adapter is completely inadequate for these devices, for two reasons: 1) it’s an end-of-line device, meaning it doesn’t daisy chain, which makes it difficult to use with computers that have few TB ports, and 2) it’s limited by TB power delivery maximums to only 10W, which many legacy FireWire devices easily exceed when operating on bus power. As an example, I have a not-that-old FireWire audio interface that I’d like to run off bus power from my laptop, on the go. It draws 7.5W idle, but spikes over 10W during startup (charging capacitors, I’m sure). That means I can’t use it with the TB adapter alone; I need either DC power (dumb) or a second adapter (since, like all good FW devices, it has two ports for daisy chaining). The DC power port went out a while back, so now I use an original iPod FireWire charger on the second port to deliver enough power.

It would be nice if anyone offered a powered FireWire adapter that could deliver a lot of wattage for legacy devices.

InSecurity: Panera Bread Co.

This is the first installment of a new segment titled InSecurity, covering consumer-relevant business and government security practices with an emphasis on their failures.

Each new week, it seems, brings a new corporate or government data breach or operational security failure to our awareness. This week is no exception. The failure this time, however, is particularly egregious: over the course of eight months, Panera Bread Co. knowingly failed to protect sensitive customer data from unauthorized access. According to security researcher Dylan Houlihan, who originally discovered the vulnerability, this data included at least one web API endpoint which allowed “the full name, home address, email address, food/dietary preferences, username, phone number, birthday and last four digits of a saved credit card to be accessed in bulk for any user that had ever signed up for an account”. Equivalent vulnerabilities were eventually discovered across multiple Panera web properties.

Houlihan first reported the issue in August of 2017, reaching out to Panera’s Information Security Director, none other than Mike Gustavison. No stranger to “managing” wildly insecure corporate systems, Gustavison worked as “ISO – Sr. Director of Security Operations” at Equifax from 2009–2013. After eight months of repeated attempts to convince Panera of the severity of their security hole, Houlihan reached out to security industry expert Brian Krebs, whose influence extracted mainstream media coverage and a formal PR statement from Panera. Incredibly, and despite public statements to the contrary, Panera failed to secure the vulnerable API endpoints.

For a full explanation of the vulnerability and a timeline of events, reference the primary link.

On the Heritability of Intelligence

In their 2013 article for Current Directions in Psychological Science, Tucker-Drob, Briley, and Harden propose a transactional model for the heritability of cognitive ability. The basis for this model is a well-documented biological phenomenon, itself the basis of phenotypic variation: the regulation of gene expression by environmental stimuli. The authors apply this finding to the characteristic of cognitive ability and support it using evidence from twin studies. They propose that, through self-selection of stimulating environments and experiences, individuals with inherited high cognitive ability cause the expression of more cognitive-ability genes, which in turn feeds the selection of more stimulating environments. However, beyond this straightforward and reasonably supported model, they also provide a guiding interpretation which places heavy emphasis on the significance of genetic heritability of high cognitive ability. They ignore shortcomings in the data and make claims which are both unnecessary and unsupported by the evidence. The most significant of these claims are presented below, with rebuttal.

Having claimed that cognitive ability is undeniably a heritable trait (“no longer a question of serious scientific debate”) (Tucker-Drob, Briley, and Harden, 2013; 1), the authors propose that this trait is subject to the same regulatory forces of expression as affect everything from the production of digestive enzymes to the melanin content of the skin. Although undoubtedly a simplification, this claim seems to hold true. The evidence brought to support the heritability, on some level, of cognitive ability is reasonably robust. In fact, it follows naturally from the fundamental theoretical basis underpinning the whole diversity of individual characteristics: the regulation of gene expression. As a trait which varies between individuals, changes over the lifetime, and is subject to heritable factors, it is reasonable (if not inevitable) that cognitive ability should have some form of expression regulation mechanism in the genome. This proposed mechanism is sound and well founded. The authors then propose that heritability of cognitive ability is environmentally influenced, and further that high cognitive ability is more heritable than low cognitive ability.

However, the shortcomings of their specific model begin to show as early as the first example. The authors discuss the average educational attainment of Norwegians, which increased during the 20th century due to social and regulatory changes favouring education, in the context of measured heritability of educational attainment increasing during roughly the same period. The educational attainment data report numbers for 1960 vs. 2000, while the data on its heritability compare “before 1940” and after, despite being published in 1985. Ignoring the obvious flaw of comparing temporally un-matched samples of population data, the example provides no evidence of what the authors claim: educational attainment is not a measure of cognitive ability, but of the strength of social norms and reach of educational programs. Given Norway’s system of public education (10 years compulsory, 3 years ‘optional’ in name only, 3–8 years optional university level), the c. 2000 attainment of 11.86 years average demonstrates that 91% of individuals complete the first 13 years. Completion of compulsory public education is not a matter of individual aptitude or achievement, but of social norms and education system policy. Nine-in-ten completion of compulsory education is not indicative of high cognitive ability, and the data on heritability of cognitive ability are too narrow to offer a comparison.
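The 91% figure appears to come from taking the average attainment as a fraction of the full 13-year course (an assumption about the arithmetic, since the calculation is not shown):

```latex
\frac{11.86\ \text{years (average attainment)}}{13\ \text{years (full course)}} \approx 0.912 \approx 91\%
```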

The next section of the piece, which introduces the underlying framework of “gene-environment correlation” before explaining the authors’ proposed transactional model of cognitive ability specifically, serves as evidence against the very application which the authors make. They put it simply: “a broad array of presumably ‘environmental’ experiences—such as negative life events, relationships with parents, and experiences with peers—are themselves heritable” (Tucker-Drob, Briley, and Harden, 2013; 2). Clearly a broad range of experiences and qualities can be inherited. While the authors propose this to support the heritability of cognitive ability, it just as easily supports the heritability of academic inclination, curiosity, and even academic performance, factors which are often measured as indicators of cognitive ability. In fact, these very listed examples of heritable experiences are known to influence cognition themselves, as are other factors which fit the same category, such as parent attachment (Fryers & Brugha, 2013; 9). This gets to the base of the authors’ proposed model: that gene-environment correlation acts in concert with gene expression regulation to promote the achievement of cognitive ability potential in individuals with a genetic disposition for it. In particular, the authors posit that the natural outcome of this process is as observed: individuals with high cognitive ability are likely to have inherited it. In fact, their evidence shows that, rather than cognitive ability itself being heritable, the indicators often measured for cognitive ability are heritable, as are many influences on the development of cognitive ability.

Further complicating the authors’ model is their poor handling of shortcomings in their base evidence, twin studies. Due to restrictions on data retention and access, international adoption, research funding, and scope, the majority of twin studies showing heritability of cognitive ability sample from within the same country, with the same social norms, educational policy, and even similar socioeconomic context. The authors, however, insist that socioeconomic status is a predictor of both the heritability and level of cognitive ability. This is faulty: if socioeconomic status predicts cognitive ability and socioeconomic status is itself significantly heritable, then it will appear that socioeconomic status influences the heritability of cognitive ability. This is demonstrated in the cited data, which break down outside of the US, particularly in social democracies such as Sweden (and Norway) where, the authors admit, strong social programs level the playing field and give individuals of diverse backgrounds access to the same pool of potential environmental stimuli to select. In these states, the socioeconomic factors that typically have a deleterious effect on educational access and achievement (and thus cognitive ability) are substantially reduced. As a result, the heritability of cognitive ability is low. This indicates that the authors’ model of positive experience selection for cognitive ability is fundamentally flawed: a socioeconomically- and gene-environment-linked deficit model better explains inheritance of cognitive ability. Instead of cognitive ability itself being inherited, it is a positive environment, with access to stimulating and diverse education and other experiences (among many factors), which is inherited. It is the absence of this environment and its opportunities for experience which reduces an individual’s achieved cognitive ability. This is clearly shown by the countries with strong social programs, where their model does not hold.

The trouble with the authors’ model is not that it is based on a flawed mechanism, but that its predictions do not hold. There is no reason that high heritability of cognitive ability should correlate with high cognitive ability itself, absent the other factors. Their model provides no explanation for this claim, yet it is a clear component of their position. The authors themselves have set up evidence and arguments which can be used to make a completely different point: cognitive ability shows high heritability because environmental factors that influence it—such as social norms around education, parenting style, household stability, and academic aptitude and drive—are themselves heritable.

Although the authors identify and describe a compelling mechanism for the influence of environment on cognitive ability, they seem to mistake the causal direction of its operation in their interpretation. Rather than an abundance of stimulating experiences and environments improving an individual’s actualisation of their inherited potential, as is implied, a more parsimonious explanation is that a shortage of these influences suppresses achievement of genetic potential. Their base assumption is made clear on p.3: “The ‘end state’ of this transactional process—high levels of and high heritability of cognitive ability—is therefore expected to differ depending on the quality and availability of environmental experiences.” Emphasis here is on the end state, particularly the high levels of cognitive ability. In other words, those who demonstrate the higher heritability of cognitive ability also demonstrate higher levels of that ability. High cognitive ability is more closely linked to parent performance than low cognitive ability. If this end state were high heritability of cognitive ability alone, then their argument would hold. However, the association of “high levels of” cognitive ability in this outcome does not follow from their model, and indicates that the observations could be better explained as a default outcome; Ockham’s Razor is well applied. This critical flaw in their proposal is laid bare in their only addressed counter-argument: that their model breaks down in countries with stronger education systems and social programs. Rather than individuals’ ability to self-select unique experiences reinforcing a pattern of gene expression and experience selection to cyclically maximise expression of inherited cognitive ability, it is a much simpler explanation that a paucity of these experiences suppresses the expression of inherited cognitive ability.


Fryers, T., & Brugha, T. (2013). Childhood Determinants of Adult Psychiatric Disorder. Clinical Practice and Epidemiology in Mental Health, 9, 1–50.

Tucker-Drob, E. M., Briley, D. A., & Harden, K. P. (2013). Genetic and Environmental Influences on Cognition Across Development and Context. Current Directions in Psychological Science, 22(5), 349–355.

A Mac Pro Proposal

Since the Future of the Mac Pro roundtable two weeks ago, there’s been a lot of chatter in the Pro user community analyzing the whole situation. Marco Arment has a pretty good overview of people’s reactions in which he makes a strong case for the value of the Mac Pro as a top-of-the-lineup catchall for user needs that just can’t be met by any other Apple hardware offering. In general, Pro users seem fairly optimistic. This is a rare case of Apple engaging with its most vocal users outside of the formal product release and support cycle, and people seem to recognize its value.

However, although many commenters have variously strong opinions about their own needs (and speaking from experience is critical to help Apple understand the diverse use cases of people who buy Mac Pros), there hasn’t been a lot published on how exactly Apple could address these user needs in real products. Drawing on the specific and general demands of various Pro users, and pulling together syntheses breaking down what main categories of users have what needs, I have a proposal for the new Mac Pro technical features and product lineup. (And it’s not a concept design rendering; I’ll leave the industrial design up to Apple.) Fair warning: this is a moderately lengthy and technical discussion.

The most important thing to understand about Pro users is, as Marco Arment explains, their diversity. Pro users need more than any other device in the consumer Mac lineup can offer, but what exactly they need more of varies greatly. Since it’s such a good breakdown, I’ll quote him:

  • Video creators need as many CPU cores as possible, one or two very fast GPUs with support for cutting-edge video output resolutions (like 8K today), PCIe capture, massive amounts of storage, and the most external peripheral bandwidth possible.
  • Audio creators need fast single-core CPU performance, low-latency PCIe/Thunderbolt interfaces, rock-solid USB buses, ECC RAM for stability, and reliable silence regardless of load. (Many also use the optical audio inputs and outputs, and would appreciate the return of the line-in jack.)
  • Photographers need tons of CPU cores, tons of storage, a lot of RAM, and the biggest and best single displays.
  • Software developers, which Federighi called out in the briefing this month as possibly the largest part of Apple’s “pro” audience, need tons of CPU cores, the fastest storage possible, tons of RAM, tons of USB ports, and multiple big displays, but hardly any GPU power — unless they’re developing games or VR, in which case, they need the most GPU power possible.
  • Mac gamers need a high-speed/low-core-count CPU, the best single gaming GPU possible, and VR hardware support.
  • Budget-conscious PC builders need as many PC-standard components and interfaces as possible to maximize potential for upgrades, repairs, and expansion down the road.
  • And more, and more

I translated this to a table, for clarity. First, a caveat: “software developers” refers to general consumer software. Software developers who work in the other listed fields have all of the same requirements as that field in addition to the general requirements: a game developer needs a good gaming GPU, a video tools developer needs lots of cores and dual GPUs, etc.:

Pro Type      CPU               GPU               RAM       Expansion
Video         High Core Count   Dual GPU          Max       Lots
Audio         Fast Single Core  Low               Max ECC   Some
Photography   High Core Count   Low               Lots      Little
Software      High Core Count   Low               Lots      None
Games         Fast Single Core  Fast Single GPU   Lots      None

This is a lot clearer, and we can see some trends. For CPU requirements, Pros generally either need fast single core performance or good multi-core performance. For GPU requirements, Pros generally either need fast single GPU performance or good dual-GPU performance. Everyone needs a lot of RAM; some need ECC RAM. Some need a lot of expansion, and others need none.

The best divider for Pro users appears to be around CPU: either fast single core CPU or high core count CPU needs. GPU needs are more variable, pretty much everyone needs a lot of RAM, and a few users need chassis expandability. The most demanding overall users are video editors, who need not only lots of CPU cores, dual GPUs, and huge amounts of RAM (128GB, if not more), but also demand a lot of internal hardware expansion for video interface/capture cards, audio interface/capture cards, and data communication cards (fiber channel). I say “internal expansion” because the reality for these users is that 1) their hardware is niche, expensive, and slow-moving thus requiring PCIe form factor support; 2) Thunderbolt 3 and other protocol adoption is slow in the industry and not available natively or just not workable in external enclosures for many applications; and 3) having stacks of external devices and enclosures, on top of other specialized hardware, is unwieldy, expensive, and ugly.

There are some other requirements that most Pro commenters noted as well:

  • Thunderbolt 3 is great, but not everyone needs as many ports. A couple folks have noted a desire for 8 Thunderbolt 3 ports at full speed, but this takes up valuable bus bandwidth and eats into PCIe capacity, so 4x might be offered as well.
  • Even accepting Thunderbolt as the future, and having a bunch of USB-C/Thunderbolt 3 full compatibility ports, there are still many USB-A devices in the world. Some of these are legacy hardware that doesn’t play nice with USB adapters and hubs. So a couple USB-A legacy ports would be nice. Speaking of USB-C, all provided Thunderbolt ports should support full Thunderbolt 3/USB-C functionality.
  • Whisper quiet operation is assumed. Besides audio applications where it is a necessity, nobody likes having a jet take off near their workspace. The fastest single GPU applications can accept a little fan noise, but it should be limited as much as possible. Whether this takes the form of more clever thermal management or liquid cooling makes no difference as long as it doesn’t restrict the thermal footprint of the device. Thermal capacity restriction is one of the primary failings of the 2013 design, identified by Pro users and engineers, and acknowledged by Apple itself.
  • Nobody cares about having an optical drive (the 2013 design had this right), but if there is one it damn well better be an industry standard component and support Blu-ray without janky third-party stuff. The inclusion of really excellent DVD decoding and playing software in the Mac was a big deal, and the lack of this software for Blu-ray is making the format a massive PITA for professionals and consumers alike. Physical media may be dying, but it’s not dead yet. A nice solution here would be a refresh to the SuperDrive: update it to TB-3/USB-C and make it read and write Blu-ray (or make two: one that reads and writes CD/DVD and one that reads and writes Blu-ray in addition).
  • Likewise, space for 3.5″ spinning platter hard drives is not important. The 2013 design made the right call on this: they’re too slow for Pro use, and there’s no need to have large/archival storage capacity within the device as long as there’s enough SSD scratch space for active projects.
  • The audio output combo jack (optical + 3.5mm TRS) is brilliant and convenient. However, many users valued the dedicated audio input jack and would like it back. Make it also a combo jack with optical and 3.5mm TRS, and support line and mic level input. This doesn’t have to be a balanced XLR connector with phantom power, just a mic/line/optical combo jack.
  • Finally, and perhaps most critically: all BTO components should be industry standard and third-party compatible. This means processor(s), GPU(s), RAM, SSDs, and standard PCIe cards. Being stuck with whatever BTO configuration they initially order is a huge, deal-breaking inconvenience for many self-employed Pro users. It’s insulting to their knowledge of their own unique product needs and comes off as absurdly greedy. Not only is the Mac market a small fraction of Apple’s revenue, the top end Pro segment is absolutely minuscule. Nickel-and-diming Pro users by charging exorbitant rates for high end BTO configurations, Apple-certified-exclusive hardware upgrades, and incompatible parts lock-in is nothing less than stupid and comes off as incredibly arrogant. Pros are willing to pay a premium for premium components, but not for huge markups on things they won’t even use in their industry (like audio pros saddled with useless dual GPUs).
  • Besides the consumer-facing arguments for using standard components, there’s also a strong technical argument to be made: a big part of the 2013 Mac Pro’s stagnation has been a lack of consistent updates from Apple, and the complete inability for third parties to fill this void. Using industry standard components makes it easier for Apple to offer consistent product updates to the latest industry offerings (requiring less R&D on building custom components), and for consumers to upgrade their existing devices as the technology advances. This is to everyone’s benefit.

Finally, I’m not here to design enclosures, only to outline purely technical requirements. How Apple chooses to package any such device(s) I am deliberately leaving open ended. I think that the 2013 Mac Pro enclosure redesign was brilliant, aesthetically and functionally. It failed not in the eyes of the Pro community because of its appearance, size, or clever design, but in their workflows and environments because it did not meet their diverse technical needs. Any new device does not have to be a return to the iconic “cheese grater” tower, but it needs to address the technical needs that I identified above.

Perhaps most of all, it needs to make products like Sonnet’s xMac Pro Server enclosure unnecessary for non-enterprise users. While such a product is fine for the datacenter and for server applications (I’m not going to go into the demand for proper Mac server hardware here), the fact that a $1,500 enclosure is the most convenient way to get back functionality that came standard in the previous generation device is obscene. I’m referring, of course, to user upgradeable (and expandable) storage and PCIe card support. Even for that much it is inadequate for GPUs, since it only offers PCIe 2.0 slots. A rack form factor is not appropriate for a significant segment of Pro users, and requiring any unwieldy external enclosure for hardware expansion is ridiculous to the point of obscenity in the face of the entire rest of the desktop computer hardware market.

With these considerations in mind, I’ve come up with a model lineup that I believe provides the best balance between flexibility/capacity and reasonable excess: meeting the highly variable needs of the Pro market segment without making everyone buy the same super-in-every-way machine. I propose two motherboard-distinct configurations, which may or may not ship in the same enclosure; I don’t care, and it doesn’t matter to Pro users as long as their technical needs are met. Without further ado:

Configuration 1: Emphasis on Audio, Games, & Software

  • Single socket latest generation Intel processor. Configurable 4 to 8 cores.
  • 4x DDR4 ECC RAM slots. Configurable 16GB up to 512GB. Make the base configuration 1x16GB so users don’t have to toss their factory RAM to upgrade a little.
  • 2x PCIe 3.0 x16 dual-width standard slots. Configurable with choice of graphics card(s) (None, Single, Dual). Maybe one or two x8 slots as well.
  • 4x Thunderbolt 3/USB-C ports.

Configuration 2: Emphasis on Video, Photography, & Software

  • Dual socket latest generation Intel processors. Configurable with one or two sockets used, 4 (single socket) to 16 cores (dual socket).
  • 8x DDR4 ECC RAM slots. Configurable 16GB up to 1024GB. 1x16GB min. configuration, again.
  • 4x PCIe 3.0 x16 dual width standard slots. Configurable with choice of graphics card(s) (None, Single, Dual). Maybe two or four x8 slots as well.
  • 8x Thunderbolt 3/USB-C ports.

Offering two distinct motherboards with two distinct levels of capability provides the kind of configuration flexibility that has been identified as crucial for the Pro device market. Audio editors aren’t locked into a dual-socket, dual-GPU device that they won’t be able to take advantage of; video editors can get the dual GPU capabilities they need; and VR and games developers can get the high-power single GPUs that are typical in their industries.

However, this is where the distinctions end. Offering too many differences between models is both too much R&D work, and also a recipe for crowding users into a configuration that will have either more of what they don’t need or less of what they do. I therefore propose that the devices be feature equivalent in the following ways:

  • Dual 10G-BaseT Ethernet. Enough professionals work in data-heavy environments that bandwidth for storage interfaces is a big issue. Dual-bonded 10Gb offers 20 gigabit throughput for a lot less cost than fiber channel, and offering it built in frees up a PCIe slot for Pros who would otherwise use fiber channel (or bonded 10GbE). Even at single 10Gb there are workflows in video, audio, and photo editing which are not possible with 1Gb. Users are becoming familiar with the speed experience of SSDs, and Pros need that speed in their networking too. Heck, USB 3.1 is 10Gb signaling. Many will only ever use one port, but providing two is the right choice.
  • Bluetooth 4.2/4.1/BLE/4.0/3.0/2.1+EDR and 802.11ac WiFi. No comment.
  • 4x M.2 PCIe NVMe SSD slots. Of course the slots could be used for other things, but the primary purpose is to access industry-standard SSDs, which the user can upgrade and expand. Although enterprise can stand shelling out big bucks for Apple SSDs BTO, nobody can stand having them soldered to the board or otherwise proprietary. Four slots should be sufficient; Pros with bigger data needs than that often have or would benefit from external RAID and NAS (especially with those 10GbE ports spec’d above). The working storage within a Pro device should be the fastest available, and of sufficient working capacity to allow video, audio, and other asset-heavy Pros to use the fastest local data storage for their project workspace. Larger capacity data storage (>4TB) is a niche market, and a strong argument can be made that these users’ needs for archival and reference storage are better met with outboard solutions, which are becoming financially and technically accessible. This is one of the driving justifications for including Dual 10GbE, to allow the fastest access to economical network-attached storage. These slots need to support RAID for drive redundancy and speed, including on boot drives. Folks also mention the value of having them for Time Machine. 8TB total capacity has been mentioned, and seems like a reasonable (if still quite expensive) upper bound. So the idea is that you get fast boot for the OS and a good chunk of fast scratch space if you’re working with large assets. Being able to have a complete local copy of large Premiere projects (or at least sufficiently large chunks of them) is, I’ve heard, invaluable.
  • 2x HDMI 2.1. HDMI 2.1 is shaping up as the future of high-resolution display connection, and having HDMI for compatibility is necessary anyway. Multichannel audio out via HDMI should of course be supported. I also understand that an Apple-branded display is returning. Many Pros benefit from having dual displays, and offering an Apple-branded display with HDMI connectivity would expand the market for such a device outside the Mac sphere, and would also free up TB3 ports for other bandwidth-intensive peripherals. Apple displays are recognized among video and photography Pros as some of the best on the market, and are still widely used even by non-Mac users. It seems reasonable to offer them in both an unadorned base HDMI configuration and a slightly more expensive TB3 (with backside USB hub) version.
  • 2x USB-A 3.1. Put one somewhere easily accessible (for USB flash drives and convenience peripherals). Put one on the back for legacy devices, for the handful of users who need them. USB-C/Thunderbolt 3 is the future of peripherals; Pros understand this, and adapters from USB-C to USB-A are cheap and not too inconvenient, so there's no need to weigh down the device with more USB-A legacy ports than these two.
  • Dedicated Audio In/Audio Out optical/analog combo jacks, with line/mic support. Many people have lamented the loss of the dedicated line-in jack. Making it an optical and line/mic combo would be fantastic. For day-to-day use, put a headset combo jack on the front panel alongside the front USB-A port: plugging headphones into the back of a device sucks, just as plugging in a USB flash drive back there does.
  • SD card combo reader. This is a convenience feature for photography and videography professionals. It’s a concession to their trade, and should be placed conveniently on the front of the device if included. However, I understand if it’s not included.
  • RAM should be ECC. This is the Pro market, and enough folks would benefit to make it standard.
  • Also, RAM should be DDR4. There's a latency penalty of a few percent, but the significant energy (aka heat) savings, access to higher-capacity DIMMs, and modern processor support make it time for the switch. Although theoretically possible, there is no DDR3 module on the market sporting >16 GiB capacity, and in fact no consumer motherboard ever manufactured could utilize such a module. There are, however, DDR4 DIMMs as large as 128GB on the market right now. They cost an arm and a leg, but they exist. Producing computers capable of utilizing these modules would increase demand, thus increasing their availability and decreasing their price.
  • PCIe card slots should support all industry-standard devices, including GPUs. The devices’ provided GPUs should also use these slots. No weird form factors, inaccessible mounting locations, or failure to support double-wide cards. The number of slots a user anticipates needing should inform their choice of build; since GPU needs are so highly variable, it is reasonable to offer variety.
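To make the networking argument above concrete, here's a back-of-the-envelope sketch of transfer times at different link speeds. The 500GB project size and 90% usable-line-rate figure are illustrative assumptions of mine, not benchmarks:

```python
# Rough transfer-time comparison for moving a large project over the network.
# Sizes and efficiency are illustrative assumptions, not measured figures.

def transfer_time_hours(size_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Hours to move size_gb of data over a link_gbps link,
    assuming `efficiency` fraction of line rate is achievable."""
    size_gigabits = size_gb * 8
    return size_gigabits / (link_gbps * efficiency) / 3600

project_gb = 500  # e.g., a modest 4K editing project
for label, gbps in [("1GbE", 1), ("10GbE", 10), ("Bonded 2x10GbE", 20)]:
    print(f"{label:>15}: {transfer_time_hours(project_gb, gbps):.2f} hours")
```

At these assumptions, a project that takes over an hour to pull across 1GbE moves in a few minutes over bonded 10GbE, which is the difference between network-attached storage being usable as working space or not.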

This provides two primary configurations with distinct motherboards and distinct capabilities. They may be similar enough to be housed in the same enclosure, or distinct enough to warrant different designs. Like I said, I'm not a designer; whether the device meets the demands of Pro users comes down to whether it supports the technical features they need. It doesn't matter if the PCIe slots are vertical, horizontal, sideways, or backwards, or even if they're mounted on opposite sides of the enclosure, as long as they meet the technical and functional needs outlined above. Likewise for the RAM and all other components.

This approach was inspired by a comment from a moderator over at tonymacx86, which suggested Apple release a mini-ITX form factor device. It would be great for the Mac gaming community and for many enthusiasts and independent Pros to have an even smaller, cheaper “Pro” desktop. Here's how such a device might look:

  • Single socket Intel processor. Configurable 2 or 4 cores.
  • 2x DDR4 ECC RAM slots. Configurable 16GB to 256GB. Make the base configuration 1x16GB so users don’t have to toss their factory RAM to upgrade a bit.
  • 1x PCIe 3.0 x16 dual-width slot. Configurable with choice of graphics card (None, Single). Maybe one x8 slot as well (or dual-purpose an M.2 slot).
  • 2x Thunderbolt 3/USB-C ports.

I imagine this device might sport dual 1GbE instead of dual 10GbE to lower the price point, and 2x M.2 slots instead of 4x. The latest generation of Gigabyte's gaming-oriented mini-ITX motherboards boasts many of these features.

However, the needs of the middle and high end of the Pro market would not be met by such a device, and offering only it and the highest spec outlined above would leave too large a gap between their capabilities. Obviously the higher-spec device is a must, because if it doesn't exist then those Pros are jumping ship for custom-built PCs. So I consider that development focus should go to the two builds described above; this third configuration is more of an “I wish” product.

Here’s a feature comparison matrix, referring to the three devices by their most-similar PC form-factor names.

                                mini-ITX            Mid Tower                 Full Tower
PCIe 3.0 x16 Dual-Width Slots   1                   2                         4
GPUs                            Integrated, Single  Integrated, Single, Dual  Integrated, Single, Dual
CPU Sockets                     1                   1                         2
CPU Cores (max)                 4                                             16
RAM Slots                       2                   4                         8
Max RAM                         256GB               512GB                     1024GB
Thunderbolt 3/USB-C             2                   4                         8
PCIe M.2 Slots                  2                   4                         4
LAN                             Dual 1GbE           Dual 10GbE                Dual 10GbE
HDMI 2.1                        2                   2                         2
Comments are open. If you’re a Pro Mac user, please let me know which configuration you would choose, what BTO options you would prefer, and what third-party hardware and peripherals you would connect. I’d like to get more direct requirements from actual users so I can better refine this proposal to meet everyone’s needs.

Crossflash IBM M1015 to LSI 9220-8i IT Mode for FreeNAS

The IBM M1015 is a widely available LSI SAS2008-based RAID controller card. It is an extremely popular choice for home and enthusiast server builders, especially among FreeNAS users, for its low price point (~$60 US secondhand on eBay) and excellent performance.

In essence, it’s hardware equivalent to the LSI 9211-8i; officially, it’s the 9220-8i, sold to OEMs to be rebadged. Two SFF-8087 mini-SAS quad-channel SAS2/SATA3 ports, no cache, no battery backup. Cross-flash it to LSI generic firmware in IT mode, they say, and you get an excellent SATA III HBA on the cheap. Turns out that’s easier said than done, especially if you’re working with a recent consumer motherboard.

The comprehensive, only slightly dated, instructions are here. Ironically, I only found them after I had pieced together the procedure for myself.

At this point, FreeNAS 9.10 is compatible with version P20 firmware. User Spearfoot on the FreeNAS forums has a package containing the utilities and firmware files. I've also attached it to this post: m1015.

Some errors you may encounter along the way, and their solutions:
  • If your motherboard lacks an easily accessible EFI shell, use the one in rEFInd.
  • If you get the error “application not started from shell”, that’s an EFI shell version compatibility issue. Use the shell provided in the link.
  • “No LSI SAS adapters found!” from sas2flsh.exe likely indicates that the IBM firmware is still present. Use megarec to erase it.
  • “ERROR: Failed to initialize PAL. Exiting Program.” means your motherboard is not compatible with the DOS sas2flsh. Use the EFI version.
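For reference, the procedure the linked guides describe boils down to roughly the following commands. Treat this as a sketch, not a copy-paste recipe: exact firmware file names vary by package, the SAS address placeholder must be replaced with the one printed on your card's sticker, and flashing the wrong image can brick the card.

```
# From FreeDOS (megarec is DOS-only): wipe the IBM SBR and firmware
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

# Reboot into an EFI shell, then flash the LSI IT-mode firmware
# and (optionally) the boot ROM
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom

# Restore the SAS address from the sticker on the card (placeholder shown)
sas2flash.efi -o -sasadd 500605bxxxxxxxxx
```

Skipping the boot ROM (-b) is fine for FreeNAS use; it only matters if you need to boot from disks attached to the card.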

Additional References:

  • GeekGoneOld on the FreeNAS forums has a quick guide: #18
  • And a useful reply in that same thread: #28
  • Redditor /u/PhyxsiusPrime describes the EFI shell compatibility workaround via rEFInd here.

OpenGL on iOS: Device Orientation Changes Without All The Stretching

Once you get your first OpenGL view working on an iPhone or iPad, one of the first things you’ll likely notice is that when you rotate your device, the rendered image in that view stretches during the animated rotation of the display. This post explains why this happens, and what you can do to deal with it. Example code can be found at

Continue reading OpenGL on iOS: Device Orientation Changes Without All The Stretching

The Blog of Philip and Dakota Schneider