Accessibility: the last consideration of software developers

Accessibility is often the last consideration of software developers.

Not because we don’t want to do it – we would if we could.

It’s for one simple reason:

Building non-accessible software is already bloody difficult!

We’re taxed to the limit as it is. Accessibility is just another layer that, I’m sorry to say, means an 80+ hour work week instead of a 60 hour week.

And businesses – unless they are specifically designed for, or required to provide, accessibility – are not interested in paying for that extra consideration.

“Exposure” is not a valid form of currency

This comic from The Oatmeal says it all:

[Comic: “Exposure” by The Oatmeal]
Source: https://theoatmeal.com/comics/exposure

Sorry folks, but “exposure” is not a valid form of currency.

If someone is good enough to do the work for you,
and what they produce is good enough for you to use,
then it’s good enough to pay them with real currency.

Simple as that.

And a pro-tip for all you “creative” types (artists, designers, developers, engineers, and even tradespeople) – usually when someone says to you “it will be good for exposure”, they are the last person to spread the word or offer referrals.
Ask me how I know.

Whether you have been doing your work for 2 months, 2 years or 20 years, your time, effort and skills are worth real money if someone is prepared to use what you produce.

The Best Practice Fallacy

I consider “Best Practice” in software development to be a fallacy.

Why?

Yesterday’s best practice is replaced by something new today.
And today’s best practice will be replaced by something else tomorrow.

I don’t have a problem with setting good guidelines and habits, but let’s not call it “best” – that implies one right way (and there are enough knuckleheads in our industry who latch onto ideas with such zeal that I really don’t want to encourage them further).

Instead, let’s think of it as:

A “good” approach for what we are trying to achieve today.

Any way you cut it, any practice is just someone’s opinion of how things should be done, and it’s not necessarily based on varied experience or hard lessons.

In my own business I sometimes dictate how things should be done. A decision needs to be made, a pattern set in place and direction set. But I’m flexible and often review, improve and adjust.
(I also pay the bills, so in the absence of a better option what I say goes.)
But in no way are the decisions I make “best practice” or based on what others consider to be best.

I regularly make decisions that run contrary to current belief but are still valid and appropriate for the situation. I do analysis, consider options and put a lot of thought into decisions (other times there’s not much thought, just a desire to learn through experimentation).

The reality is, in software there are very few things you need to adhere to. Create code and systems others can understand and maintain. Expect change. Don’t be an asshole.

Apart from that, our industry is so young, so fast moving, and has so many possibilities and facets that it’s impossible to define “best”.

So let’s just drop the bullshit, call a spade a spade, and admit we’re all learning and making this up as we go.

Asynchronous Programming is Like Caching

I tend to approach “asynchronous” programming as I do “caching”: that is, I prefer to touch neither.

While both are important for software performance, I also consider them “advanced techniques” and “optimisations” that, to be honest, I don’t think the average developer (including me) does all that well.
They require a non-traditional mindset, are difficult to learn, and difficult to implement well (regardless of language or framework support), and as such are easy to do poorly – which increases the likelihood of errors and security holes.

I prefer to investigate more traditional synchronous and non-cache based approaches to solve problems before I go down these rabbit holes.
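
To make that concrete, here’s a minimal TypeScript sketch (the names and the toy `loadUser` lookup are mine, purely for illustration) of why I treat caching as an “advanced technique”: the cached version looks harmless, but it quietly carries at least three classic bugs the plain version cannot have.

```typescript
type User = { id: string; name: string };

async function loadUser(id: string): Promise<User> {
  // Stand-in for a slow lookup (e.g. a database or API call).
  return { id, name: `user-${id}` };
}

// The plain approach: always ask the source of truth. Slower, but boring and correct.
async function getUser(id: string): Promise<User> {
  return loadUser(id);
}

// The "optimised" approach: remember results in memory.
const cache = new Map<string, User>();

async function getUserCached(id: string): Promise<User> {
  const hit = cache.get(id);
  if (hit) return hit;             // Bug 1: no expiry, so stale data lives forever.
  const user = await loadUser(id); // Bug 2: two concurrent misses both hit the
                                   // backend (a "stampede") – no coalescing.
  cache.set(id, user);             // Bug 3: unbounded growth, a slow memory leak.
  return user;
}
```

None of these is hard to fix on its own, but every fix (expiry, request coalescing, eviction) is exactly the kind of non-obvious extra code I’d rather not write until the plain version has been proven too slow.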

Programming and Software Development: It’s Really Hard!

Creating software is a difficult, time-consuming, labour-intensive process.

And it doesn’t matter how many tools, frameworks, patterns or processes you have, it will continue to be the case. In fact, throwing more of those into the development process will only make it harder and longer.


Why?

Requirements will change. Always. Without question. Because both the customer and developer cannot know what is best until they have something to use and test.

Changing requirements means refactoring. And refactoring means regression testing, to ensure something that previously worked still works.

Testing is needed to ensure what we create works as expected, and that takes time and careful consideration. Automated or manual – it doesn’t matter. Testing takes as much time (or more) as development. In fact, it’s actually more difficult than development because testers need to think of the things that programmers (and architects) didn’t.

The software needs to be secured against hacking and accidental “leaking” of data. And that’s a whole extra “non-functional” layer of development (similar to writing automated tests). What does that mean? There’s a huge amount of time, effort and code required to secure a software system – work that doesn’t actually contribute to the functionality of the problem being solved.

Usability needs to be considered. That is, considering if the software is “friendly” and usable by the customer. Hint: it’s probably not. This also needs testing. And often a few re-designs. Which in turn means refactoring, and more tests.

There’s also the fast changing pace of software technology. It is increasingly harder for software developers to keep up with changes in languages, tools, platforms and frameworks. This may mean 3rd-party software used to “assist” in building the software could be obsolete before the project is even complete. It can also lead to “analysis paralysis”, where there are so many options to choose from that the developer can’t make a decision.
This churn of technology, often driven by a changing security landscape as well as cycles of so-called “best practice”, also leads to a continual need to maintain and update the software, again requiring more testing.


What can we do about it?

Nothing. At this time, in April 2019, I don’t see a way to simplify the process or decrease development time. In fact, I believe the situation is going to worsen before it gets better.

The one thing we can do is acknowledge the situation.

Software development takes a lot of time and effort.
Revision is necessary.
Testing is necessary.
Non-functional code is necessary.

This is the reality, whether you want to believe it or not. So let’s accept it, estimate and quote for it, and educate our customers to the consequences of ignoring it.

Software Patterns & Best Practice: What if the authors are wrong?

I take the following view:

Software patterns and “best practice” are just the opinions of people who managed to get published.

Who says they are right?
Why are they right?
Are they right?

Because in my considerable experience there is no such thing as “best practice”:

You do the best you can, with what you have, under the circumstances at the time.

As for software patterns… pfft!
I’ve tried reading “Design Patterns: Elements of Reusable Object-Oriented Software” and “Patterns of Enterprise Application Architecture” and never made it more than a quarter of the way through. (I think I made it further in War and Peace when I was a teenager.)
Then there’s SOLID and REST APIs.
(REST was a f**king thesis for goodness sake!)

Some people blindly follow the so-called best practices of software but forget they are just the opinions of a handful – literally a handful – of people who managed to get noticed or get a book deal.
And in no way do they reflect reality.

Reality is “average” people try to figure out the best, fastest and easiest way to implement software for business requirements that won’t get them fired.

It’s 2019 and today we’re all supposed to be doing DevOps and Agile, writing unit tests, performing automated testing, using NoSQL databases, in the Cloud, following the OWASP Top 10, logging and auditing everything and making no mistakes.
(There are major companies in my city with names on the top of skyscrapers asking university students to teach them about DevOps pipelines.)

The reality is it’s no better than 15 years ago.
We still haven’t solved the “low hanging fruit” problems.

Every day brings a new data breach. Automated or unit testing is not in the budget. Agile just means standing in a circle telling your team what you’re working on each morning (if your team lead gets their shit together). Many businesses are still running Exchange, Active Directory and databases on-site. Windows 7 is still live in the Enterprise. Testing is more of a pain-in-the-ass than ever, Selenium still sucks, and SQL databases still rule.
(Also jQuery isn’t going anywhere, and if Google or Facebook were really ‘that good’, why are there so many versions of React and Angular where they’re still trying to get it right?)

Plus there is still someone running a PC under their desk in a small-to-mid sized business with the all-important Excel file/Access database/ASP website that the business lives or dies on.

Just because the most vocal people in the industry say “it should be” doesn’t make it so.
Businesses still pay the bills, and the priorities of businesses are not the “best practice” of software development.
Businesses don’t give a shit about “software best practice”.
That doesn’t make it right – but it is reality.

Right now I can’t recite to you a single definition of SOLID, or tell you definitively how any software pattern is supposed to work.
But I’ve dedicated over 50,000 hours of my life to creating software and solving business problems with software, and I can tell you this for certain: it doesn’t matter.

What really matters is simple:

  • Keep solutions and code simple.
  • Always think about security before anything else. (Then actually make your fucking systems secure!)
  • Make software usable and understandable for users.
  • Create readable code for yourself and your fellow software developers (if it can be read by a junior developer you’ve done your job).
  • Write documentation. Code is NOT self-documenting – it requires documentation and comments.
  • Make sure the software can be maintained and changed easily.
  • Log changes and events. It doesn’t matter how – just make sure you start logging from day 1 (a minimal sketch follows this list).
  • Watch and monitor the systems you build.
  • Remember: You are not the user. How you think it will be used is almost certainly wrong or not enough.
  • If you find yourself fighting your tools or swearing at your screen, you’re doing it wrong.
  • Bonus: Command-line requires you to remember commands, and involves more manual work. GUIs do not. Create GUIs. Command-line sucks.
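
As flagged in the logging point above, here’s a minimal “day 1” logging sketch in TypeScript – no library, no configuration, just timestamped, structured lines to stdout. The shape of the entries is my own choice, not a standard; the point is that starting this simply costs almost nothing.

```typescript
type Level = "info" | "warn" | "error";

// Log one structured, timestamped line per event – greppable from day 1,
// and easy to swap for a "real" logging library later.
function log(level: Level, event: string, detail: Record<string, unknown> = {}): void {
  console.log(JSON.stringify({ time: new Date().toISOString(), level, event, ...detail }));
}

// Usage (hypothetical event names):
log("info", "user.login", { userId: "42" });
log("error", "invoice.save.failed", { invoiceId: "INV-7", reason: "timeout" });
```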

And any estimate you give, multiply it by 3 first.

Finally, consider this:

The only times I’ve ever discussed or had to describe software patterns is in highly technical job interviews.

What does that tell you?
I talk to a lot of developers, on a lot of topics, so why haven’t we had “software pattern” conversations in passing?
I would welcome it! If only to learn how other developers have implemented different software patterns (successfully or otherwise).

In reality I think we all work with different patterns, we just don’t think about it at the time, and frankly who gives a shit – we’re doing our job. Do we really need to say “hey, I just implemented the observer pattern”? (I had to look that up to give you an example).
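
For anyone else who’d have to look it up: stripped to its core, the observer pattern is just a subject keeping a list of callbacks and calling them when something happens. A minimal TypeScript sketch (the names are mine, purely for illustration):

```typescript
type Observer<T> = (value: T) => void;

// A subject keeps a list of callbacks ("observers") and calls each one
// when something worth announcing happens. That's the whole pattern.
class Subject<T> {
  private observers: Observer<T>[] = [];

  subscribe(observer: Observer<T>): void {
    this.observers.push(observer);
  }

  notify(value: T): void {
    for (const observer of this.observers) observer(value);
  }
}

// Usage (hypothetical example): react when an order is placed.
const orders = new Subject<string>();
orders.subscribe((id) => console.log(`send confirmation email for ${id}`));
orders.notify("order-123");
```

If you’ve ever wired up an event listener, you’ve implemented this without ever saying its name – which is rather the point.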

I’ve spent almost 20 years reading and modifying other people’s code. I’m an expert debugger in my field (CSS is my favourite, and raw JavaScript is my friend).
I don’t care what patterns people use (because what’s the “right” or “wrong” pattern anyway?)
What I care about is being able to quickly and easily read and understand code when I have to fix a problem that is causing difficulties for a customer or employer.

Software patterns and “best practice” don’t mean shit.

What matters are:

  • Business continuity
  • Security
  • Usability
  • Readability and Maintenance
  • Extensibility

Everything else is a side issue.

Being able to identify a problem doesn’t mean I have an answer

I’m a big proponent of asking questions. But one question I hate is:

What’s the solution?

It’s a loaded question, intended to deflect.

The scenario where we most hear this is when someone says “X is a problem” to a creator, leader or manager of X.
Then the leader immediately replies: “Great. What’s the solution? How do we fix it?”

Similarly, some leaders will say: “Don’t just raise problems. Suggest solutions.”

This is a pure bullshit tactic by leaders to deflect the problem back on the person reporting it.

I look at it like this:

Just because a person can see a problem, doesn’t mean they have the vision to understand the cause or how to solve it.

Likewise, they probably do not have the vision to create the thing that caused the problem in the first place.

Humans are good at spotting problems. We’re not that great at finding solutions.

Yes, there are times when the person identifying the problem can offer a solution. These are the times when their expertise is what allows them to see the problem in the first place.

But working in business, many times I’ve heard someone raise a concern that is immediately quashed by the “how do we solve it” question, the underlying subtext being “if you don’t have an answer, don’t ask the question”.

Rookie Mistakes

I was listening to a security podcast and someone of considerable industry experience said (of something they had just done):

“That’s a rookie mistake”.

No, it’s not.

It’s just a “mistake”.

To say it’s a “rookie” mistake is being disingenuous to rookies, particularly when the experienced person is still making that same mistake.

So, by virtue of the fact that it is an easy mistake for someone of experience to make, it is just a plain old “easy” mistake.

Cost is not universal (when Pluralsight becomes expensive)

Pluralsight is a great learning resource for anyone in IT or looking to learn something IT related. I’ve had an annual subscription for a few years, and now I have a second account for my development staff.

What gets me though is the cost, and right now I’m walking a very fine line between value and expense.

On one hand it’s a highly valuable resource with wide-ranging, quality material. On the other hand, I (and my developers) struggle to find time to use it.

It can also be argued it’s reasonably priced for the volume of material offered – USD$35/month or $299/year. With enough time you can gain expert knowledge on almost any topic.

But… the value of a US Dollar isn’t universal.
At the moment in Australia USD$1 is about AUD$1.39.
In India it’s about INR 71.

For me, an annual subscription actually costs AUD$416. Right now I’m self-employed and this year I’m projecting about $75,000 income (including tax and superannuation). That leaves me with about $54,000 in the pocket.
So all up the subscription is about 0.77% of my cleared income, out of my pocket. Still not looking too bad given the value it provides (if I have time to use it), though it is starting to hurt.
But the point is: An actual $416 doesn’t look as shiny as the $299 sticker price on the site.

Let’s look at a greater contrast of price inequality.
USD$299 comes out to about INR 21,229 in India.
To put that in perspective a well paid mid-senior developer might get INR 22,000 per month salary.
In other words, in India an annual Pluralsight subscription can cost a developer one month’s salary.
Suddenly that convenient 300-minus-1 price isn’t looking so shiny.
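
To make the comparison explicit, here’s the same arithmetic as a small TypeScript sketch, using the April 2019 exchange rates and the INR 22,000/month salary figure quoted above (both will have drifted since):

```typescript
// The same arithmetic as above, made explicit.
const priceUSD = 299;      // annual Pluralsight subscription
const usdToAud = 1.39;     // April 2019 rate quoted above
const usdToInr = 71;       // April 2019 rate quoted above

const priceAUD = priceUSD * usdToAud;     // ≈ AUD 416
const priceINR = priceUSD * usdToInr;     // ≈ INR 21,229

// My case: share of roughly AUD 54,000 cleared income.
const shareOfIncome = priceAUD / 54000;   // ≈ 0.0077, i.e. about 0.77%

// A well paid mid-senior Indian developer on INR 22,000/month:
const monthsOfSalary = priceINR / 22000;  // ≈ 0.97 – close to a full month's pay

console.log({ priceAUD, priceINR, shareOfIncome, monthsOfSalary });
```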

The point is, cost is not universal, and that’s something we can easily forget.
Different countries. Different stations in life. Different situations and costs of living – the value of a dollar varies widely.

While I still believe the quality of content on Pluralsight is high, I am constantly looking for other services with competitive offerings and a better price point.
I work hard for my money. Only a mug would give away more than they need to.