Command-line editors: there is no debate

It’s 2019.

Why the heck are we still having the vim vs emacs vs other command-line vs basic-text-editor debate for writing code?!

Don’t use either!

There are editors, IDEs and tooling today that give a far superior writing, debugging and support experience.

It is NOT “cool” or a “good experience” to write in command-line editors. That’s like a modern mechanic keeping a Model T Ford in the workshop to use as a reference for repairing a Tesla.

IT people and software developers are here to serve businesses and consumers in the best way possible, not scratch around in an old sandpit. We even have modern tools that give us a superior experience for FREE (I’m looking at you Visual Studio Code).

I certainly won’t hire someone if their tool of choice is command-line based or they use “grep” in conversation. That’s a time and financial loss waiting to happen.

The Cloud (is not as rosy as it seems)

The Cloud doesn’t remove the need for infrastructure and operations people. Nor does it particularly make infrastructure easier.

It simply shifts where the hardware runs and who owns it.

And for small software teams it still means the developers and project manager are “the” operations and infrastructure people.

The big difference now is The Cloud makes it much easier to accidentally spend vast amounts of money on infrastructure you had no idea you deployed (or who deployed it), where you struggle to understand what it does or why you need it, and you can never seem to figure out how it is charged on your bill.

I know this because I live it.

 

(Also published on LinkedIn)

The difference between old media and social media

There are two clear differences between almost all published media that has come before and the social media we know now.

1. Available audience
2. Cost

Not cost to the consumer.

I’m talking about cost to the publisher – the person who wants to distribute their content.

In the past there has always been a greater cost to produce and distribute content, whether in time, effort, materials or straight-up money to get others to do the work.
There were higher barriers to publishing and distribution: paper, printing, physical distribution of real media, a limited audience who would see the material. It was not readily scalable, nor was it cheap to scale.

That cost generally translated into a need for higher quality content, because the producer had to maximise their return.
By that I mean: they couldn’t afford to produce shit no one would waste their time on, whether they were paying or not.
They needed maximum return on the money spent, for what was generally an expensive outlay to spread their content.

In addition, and partly due to the cost of the media and its distribution, the audience was limited.
Distribution was limited, harder, and bound by the physical medium.
Again, the producer needed to ensure maximum adoption of their content.

But with the Internet and social media that has all changed.

Social platforms allow “free” and easy distribution of content.
They also allow a “free” and easy inclusion of a massive audience.
That’s an incredibly low barrier to entry on both sides of the equation.
Combine that with increasingly easy ways to create content – written, audio and video – through readily available consumer tools, and an abundance of cheap add-on services offered via outsourcing, and we now have social platforms open to publishing almost limitless amounts of content, of every quality, at almost zero cost to both the publisher and consumer.

But there is a cost.

Some may say it’s distraction. Some say privacy. And others see pathways for disinformation.
All are true to varying degrees.

What I see as the ultimate cost to the average person is:

Time and attention.

Because every piece of content generated and distributed on the social platforms is calling for our attention. And it takes away our precious time, to both consider and digest, regardless of value.
People are hungry for it (for whatever their individual reason).
And worst of all, in the world of social media:

There are no editors.
No one to check for quality or facts.
No one to sort the wheat from the chaff.

Our time – our precious, limited time alive – is second-by-second being consumed by other people’s shit.
Shit that, now the economic barrier is gone, anyone can produce and distribute to suck up our time.
Yes, we get a momentary hit of joy or a hint of interest, but how often do we get value that actually enriches us or is usable?

I wonder: what if the social networks flipped the model and started charging for content to be published? How would the world look in 10 years’ time?

Accessibility: the last consideration of software developers

Accessibility is often the last consideration of software developers.

Not because we don’t want to do it – we would if we could.

It’s for one simple reason:

Building non-accessible software is already bloody difficult!

We’re taxed to the limit as it is. Accessibility is just another layer that, I’m sorry to say, means an 80+ hour work week instead of a 60 hour week.

And businesses – unless they are specifically designed for or required to provide accessibility – are not interested in paying for that extra consideration.

“Exposure” is not a valid form of currency

This comic from The Oatmeal says it all:

[Comic: Exposure]
Source: https://theoatmeal.com/comics/exposure

Sorry folks, but “exposure” is not a valid form of currency.

If someone is good enough to do the work for you,
and what they produce is good enough for you to use,
then it’s good enough to pay them with real currency.

Simple as that.

And a pro-tip for all you “creative” types (artists, designers, developers, engineers, and even tradespeople) – usually when someone says to you “it will be good for exposure”, they are the last person to spread the word or offer referrals.
Ask me how I know.

Whether you have been doing your work for 2 months, 2 years or 20 years, your time, effort and skills are worth real money if someone is prepared to use what you produce.

The Best Practice Fallacy

I consider “Best Practice” in software development to be a fallacy.

Why?

Yesterday’s best practice is replaced by something new today.
And today’s best practice will be replaced by something else tomorrow.

I don’t have a problem with setting good guidelines and habits, but let’s not call it “best” – that implies one right way (and there are enough knuckleheads in our industry who latch onto ideas with zeal that I really don’t want to encourage further).

Instead, let’s think of it as:

A “good” approach for what we are trying to achieve today.

Any way you cut it, any practice is just someone’s opinion of how things should be done, and it’s not necessarily based on varied experience or hard lessons.

In my own business I sometimes dictate how things should be done. A decision needs to be made, a pattern set in place and direction set. But I’m flexible and often review, improve and adjust.
(I also pay the bills, so in the absence of a better option what I say goes.)
But in no way are the decisions I make “best practice” or based on what others consider to be best.

I regularly make decisions that are contrary to current belief but still valid and appropriate for the situation. I do analysis, consider options and put a lot of thought into decisions (other times there’s not much thought, just a desire to learn through experimentation).

The reality is, in software there are very few things you need to adhere to. Create code and systems others can understand and maintain. Expect change. Don’t be an asshole.

Apart from that our industry is so young, so fast moving, and has so many possibilities and facets it’s impossible to define “best”.

So let’s just drop the bullshit, call a spade a spade, and admit we’re all learning and making this up as we go.

Asynchronous Programming is Like Caching

Update: 04 March 2020

My viewpoint has slightly relaxed, though I still hold true that asynchronous programming is damn difficult to mentally implement.

What I originally neglected to identify is “why”. And I think the reason is simple:

People don’t intuitively think asynchronously.

Likewise, we don’t think in a caching manner.

And the core problem behind both is that we have to simultaneously monitor and make decisions on actions that may or may not happen, at a time we can’t determine, and then proceed on a path [that would have been synchronous] to complete a task.
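
To put some shape around that, here’s a rough sketch of the kind of branching I mean – in TypeScript, combining a cache lookup with an asynchronous call. The endpoint, the getUser function and the in-memory cache are all hypothetical, made up purely for illustration; the point is how many “what if” decisions pile up in code that would otherwise be one synchronous lookup.

```typescript
// Hypothetical example only – the API, the cache and the TTL are all assumptions.

type User = { id: string; name: string };

const cache = new Map<string, { value: User; fetchedAt: number }>();
const TTL_MS = 60_000; // how long a cached entry is treated as "fresh"

async function getUser(id: string): Promise<User> {
  // Decision 1: is there a cached value, and is it still fresh enough to use?
  const hit = cache.get(id);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) {
    return hit.value;
  }

  // Decision 2: the request may succeed, fail, or simply take too long.
  // We can't know when (or if) it completes, yet the calling code has to
  // carry on as though this were a plain synchronous lookup.
  try {
    const response = await fetch(`https://example.com/api/users/${id}`);
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }
    const user = (await response.json()) as User;
    cache.set(id, { value: user, fetchedAt: Date.now() });
    return user;
  } catch (err) {
    // Decision 3: fall back to the stale cached value, or give up entirely?
    if (hit) {
      return hit.value;
    }
    throw err;
  }
}
```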


 

I tend to approach “asynchronous” programming as I do “caching”: that is, I prefer to touch neither.

While both are important for software performance, I also consider them “advanced techniques” and “optimisations” that, to be honest, I don’t think the average developer (including me) does all that well.
They require a non-traditional mindset, are difficult to learn, difficult to implement well (regardless of language or framework support), and as such, are easy to do poorly, which increases the likelihood of errors and security holes.

I prefer to investigate more traditional synchronous and non-cache based approaches to solve problems before I go down these rabbit holes.

Programming and Software Development: It’s Really Hard!

Creating software is a difficult, time consuming, labour intensive process.

And it doesn’t matter how many tools, frameworks, patterns or processes you have, it will continue to be the case. In fact, throwing more of those into the development process will only make it harder and longer.

 

Why?

Requirements will change. Always. Without question. Because both the customer and developer cannot know what is best until they have something to use and test.

Changing requirements means refactoring. And refactoring means regression testing, to ensure something that previously worked still works.

Testing is needed to ensure what we create works as expected, and that takes time and careful consideration. Automated or manual – it doesn’t matter. Testing takes as much time (or more) as development. In fact, it’s actually more difficult than development because testers need to think of the things that programmers (and architects) didn’t.

The software needs to be secured against hacking and accidental “leaking” of data. And that’s a whole extra “non-functional” layer of development (similar to writing automated tests). What does that mean? There’s a huge amount of time, effort and code required to secure a software system that doesn’t actually contribute to the functionality of the problem being solved.

Usability needs to be considered. That is, considering if the software is “friendly” and usable by the customer. Hint: it’s probably not. This also needs testing. And often a few re-designs. Which in turn means refactoring, and more tests.

There’s also the fast changing pace of software technology. It is increasingly harder for software developers to keep up with changes in languages, tools, platforms and frameworks. This may mean 3rd-party software used to “assist” in building the software could be obsolete before the project is even complete. It can also lead to “analysis paralysis”, where there are so many options to choose from the developer can’t make a decision.
This churn of technology, often driven by a changing security landscape as well as cycles of so-called “best practice”, also leads to a continual need to maintain and update the software, again requiring more testing.

 

What can we do about it?

Nothing. At this time, in April 2019, I don’t see a solution to simplifying the process or decreasing development time. In fact, I believe the situation is going to worsen before it gets better.

The one thing we can do is acknowledge the situation.

Software development takes a lot of time and effort.
Revision is necessary.
Testing is necessary.
Non-functional code is necessary.

This is the reality, whether you want to believe it or not. So let’s accept it, estimate and quote for it, and educate our customers to the consequences of ignoring it.

Software Patterns & Best Practice: What if the authors are wrong?

I take the following view:

Software patterns and “best practice” are just the opinions of people who managed to get published.

Who says they are right?
Why are they right?
Are they right?

Because in my considerable experience there is no such thing as “best practice”:

You do the best you can, with what you have, under the circumstances at the time.

As for software patterns…. pfft!
I’ve tried reading “Design Patterns: Elements of Reusable Object-Oriented Software” and “Patterns of Enterprise Application Architecture” and never made it more than a quarter of the way through. (I think I made it further in War and Peace when I was a teenager.)
Then there’s SOLID and REST APIs.
(REST was a f**king thesis for goodness sake!)

Some people blindly follow the so-called best practices of software but forget they are just the opinions of a handful – literally a handful – of people who managed to get noticed or get a book deal.
And in no way do they reflect reality.

Reality is “average” people try to figure out the best, fastest and easiest way to implement software for business requirements that won’t get them fired.

It’s 2019 and today we’re all supposed to be doing DevOps, Agile, writing unit tests, performing automated testing, using non-SQL databases, in the Cloud, following OWASP Top 10, logging and auditing everything and making no mistakes.
(There are major companies in my city with names on the top of skyscrapers asking university students to teach them about DevOps pipelines.)

The reality is it’s no better than 15 years ago.
We still haven’t solved the “low hanging fruit” problems.

Every day brings a new data breach. Automated or unit testing is not in the budget. Agile just means standing in a circle telling your team what you’re working on each morning (if your team lead gets their shit together). Many businesses are still running Exchange, Active Directory and databases on-site. Windows 7 is still live in the enterprise. Testing is more of a pain-in-the-ass than ever, Selenium still sucks, and SQL databases still rule.
(Also jQuery isn’t going anywhere, and if Google or Facebook were really ‘that good’, why are there so many versions of React and Angular where they’re still trying to get it right?)

Plus there is still someone running a PC under their desk in a small-to-mid sized business with the all important Excel file/Access database/ASP website that the business lives-or-dies on.

Just because the most vocal people in the industry say “it should be” doesn’t make it so.
Businesses still pay the bills, and the priorities of businesses are not the “best practice” of software development.
Businesses don’t give a shit about “software best practice”.
That doesn’t make it right – but it is reality.

Right now I can’t recite to you a single definition of SOLID, or tell you definitively how any software pattern is supposed to work.
But I’ve dedicated over 50,000 hours of my life to creating software and solving business problems with software, and I can tell you this for certain: it doesn’t matter.

What really matters is simple:

  • Keep solutions and code simple.
  • Always think about security before anything else. (Then actually make your fucking systems secure!)
  • Make software usable and understandable for users.
  • Create readable code for yourself and your fellow software developers (if it can be read by a junior developer you’ve done your job).
  • Write documentation. Code is NOT self-documenting – it requires documentation and comments.
  • Make sure the software can be maintained and changed easily.
  • Log changes and events. It doesn’t matter how – just make sure you start logging from day 1.
  • Watch and monitor the systems you build.
  • Remember: You are not the user. How you think it will be used is almost certainly wrong or not enough.
  • If you find yourself fighting your tools or swearing at your screen, you’re doing it wrong.
  • Bonus: Command-line requires you to remember commands and do more manual work. GUIs do not. Create GUIs. Command-line sucks.

And any estimate you give, multiply it by 3 first.

Finally, consider this:

The only times I’ve ever discussed or had to describe software patterns is in highly technical job interviews.

What does that tell you?
I talk to a lot of developers, on a lot of topics, so why haven’t we had “software pattern” conversations in passing?
I would welcome it! If only to learn how other developers have implemented different software patterns (successfully or otherwise).

In reality I think we all work with different patterns, we just don’t think about it at the time, and frankly who gives a shit – we’re doing our job. Do we really need to say “hey, I just implemented the observer pattern”? (I had to look that up to give you an example.)
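
For anyone who, like me, had to look it up: here’s a bare-bones sketch of the observer pattern in TypeScript. The Subject class and the stock-price example are made up for illustration – the whole idea is just that a subject keeps a list of subscribers and tells them when something changes.

```typescript
// Minimal observer pattern sketch – hypothetical names, not from any real codebase.

type Listener<T> = (value: T) => void;

class Subject<T> {
  private listeners: Listener<T>[] = [];

  // Register a callback to be told about future changes.
  subscribe(listener: Listener<T>): void {
    this.listeners.push(listener);
  }

  // Push a new value out to everyone who subscribed.
  notify(value: T): void {
    for (const listener of this.listeners) {
      listener(value);
    }
  }
}

// Usage: anything that subscribed gets called when the value changes.
const stockPrice = new Subject<number>();
stockPrice.subscribe((price) => console.log(`Price updated: ${price}`));
stockPrice.notify(42.5);
```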

I’ve spent almost 20 years reading and modifying other people’s code. I’m an expert debugger in my field (CSS is my favourite, and raw JavaScript is my friend).
I don’t care what patterns people use (because what’s the “right” or “wrong” pattern anyway?)
What I care about is being able to quickly and easily read and understand code when I have to fix a problem that is causing difficulties for a customer or employer.

Software patterns and “best practice” don’t mean shit.

What matters are:

  • Business continuity
  • Security
  • Usability
  • Readability and Maintenance
  • Extensibility

Everything else is a side issue.