I had this one user who kept using an old report. It used a terribly provisioned db account and had to be changed.
We created a v2 that was at feature parity to v1 and told users to move off of v1. Slowly but surely it happened.
Except one user.
We put up nag screens, added delays on data return, everything we could to “carrot” them to the new version, but they stuck with it.
Eventually I called the guy and just asked him, “Why are you still using the old version?”
His answer, “no one ever told me about the new version.”
I asked him if he got our email. He said no. I forwarded it to him.
“Oh.”
I asked him, “Didn’t you read the nag screens?” He said no.
I asked him, “The page doesn’t allow you to move on until you wait 90 seconds. Why didn’t you read it?”
“I didn’t think it was important.”
I learned an important lesson that day: never wait for all users to move. Once you have enough, start doing scream tests.
I was on-site for users learning our new program. Watched them do something, a dialog came up, and faster than I could catch what it was, they closed it. Dialogs are warnings or confirmations, you know, and they had no idea what it said…
So yeah, sometimes I do think there should be a wait time on the OK button.
Alert fatigue.
a dialog came up, and faster than I could catch what it was, they closed it
My wife does this and then asks me to fix whatever is broken. I’d be able to help if I was able to read the error!
So you just gave him an excuse to go have a coffee break and wondered why he didn’t care? :P
Bastard user from hell
Bastard user from hell
Every IT/software group needs to have one, otherwise you get complacent.
If some guy just minding his own business is your “user from hell” I am truly envious of your job
I feel like he was not minding the business he was supposed to mind.
I used to work for a university, trying to modernize how people got student and financial data. Over half my work was playing politics: rooting out people who refused to change and going above their heads. We had one guy who didn’t want to update a script on his end to include even the bare minimum of “security”: a hard-coded plaintext password. It took me months, and in the end I had to go to his office to update the script myself. He complained about it the entire four minutes it took.
Email to all:
“Due to budget constraints, resources will shift from $oldThingy to $newThingy. As a result, $oldThingy’s availability can no longer be maintained at the previous level.”
Then randomly kill oldThingy for more and more hours each day.
location /old_api { rewrite ^/old_api(.*)$ /new_api$1 permanent; }
(can’t be bothered to check the syntax).
If there’s a major version change, it means old API calls can break against the new API, assuming the maintainers are accurately following semver.
You’re absolutely right. In my mind “feature parity” got garbled into “backwards compatibility”.
A translation layer could be used, no? Check api version, translate any v1 specific calls into their v2 counterparts, then submit the v2 request?
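A minimal sketch of that shim in Python, where the endpoint names and the field rename are invented purely for illustration:

```python
# Hypothetical v1 -> v2 translation layer. The endpoints, field
# renames, and backend stub below are all made up for illustration.

def translate_v1_to_v2(path: str, payload: dict) -> tuple[str, dict]:
    """Rewrite a v1 request into its v2 counterpart."""
    payload = dict(payload)  # don't mutate the caller's dict
    if path == "/v1/users" and "name" in payload:
        # Pretend v2 renamed 'name' to 'full_name'.
        payload["full_name"] = payload.pop("name")
    # Calls with no v1-specific quirks just get the version bumped.
    return path.replace("/v1/", "/v2/", 1), payload

def submit_v2(path: str, payload: dict) -> dict:
    """Stand-in for the real v2 backend."""
    return {"path": path, "payload": payload}

def handle(path: str, payload: dict) -> dict:
    """Single entry point: v1 traffic is translated, v2 passes through."""
    if path.startswith("/v1/"):
        path, payload = translate_v1_to_v2(path, payload)
    return submit_v2(path, payload)
```

The appeal is that v1 callers never have to change anything; the cost is that the shim has to track every v2 change.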
This isn’t really efficient, because now whenever v2 gets updated you have to update the translation layer as well.
Any improvements you made in v2 would likely not translate.
Essentially the best way is to provide users with an incentive to switch. Perhaps a new feature or more requests.
Publish v3, then add a translation layer for v2 to v3
This reminds me of QEMU internals. Virtual hardware support is paramount in an emulator, so nobody wants to break old code that was probably written by an expert who knew that piece of hardware better than you ever will.
“The other 98% of the codebase.”
I remember my engineer being such a hardass on using v2 of our API and when I went to implement a feature, v2 didn’t even have ANY of the endpoints I needed
I don’t get why anyone would publish a v2 when it’s not really at feature parity. Do companies really start releasing v2 endpoints slowly?
In my experience, it was an attempt to prune the stuff in the old API that wasn’t useful. A successful attempt, since the backend dev working on it was in the same room as me and I could yell at him.
it’s called the strangler pattern, where the new version is layered on top of the old and gradually replaces it.
it usually doesn’t work.
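For what it’s worth, the pattern itself is just a facade that routes each endpoint to whichever implementation currently owns it; a toy sketch (routes and handler names invented):

```python
# Toy sketch of the strangler-fig pattern: a facade routes each
# endpoint to whichever implementation currently owns it.
# Routes and handler names are invented for illustration.

def legacy_handler(route: str) -> str:
    return f"legacy:{route}"

def new_handler(route: str) -> str:
    return f"v2:{route}"

# Routes move into this set one at a time as they get rewritten.
# Once it covers everything, the legacy code can finally be deleted.
MIGRATED = {"/report", "/users"}

def dispatch(route: str) -> str:
    """Callers hit the facade and never see which system answered."""
    if route in MIGRATED:
        return new_handler(route)
    return legacy_handler(route)
```

The point is that callers are never forced onto a big-bang v2; the old system just quietly shrinks until it’s gone.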
Man I’ve never seen it not work. It’s pretty much the only pattern I use because it’s so successful. Meanwhile the other teams in my company have numerous failed migrations because they try to rewrite the entire thing at once instead of using the strangler fig pattern.
The only time I’ve ever tried it was in a duolith consisting of over half a million lines of Python, all of them critical.
I suspect that starting your own version of the API is the Software Designer / Software Architect version of Programmers’ “I know best so I’m going to do my part of the code my way which is different from everybody else’s”.
Mind you, at the very least good Software Architects should know best, but sometimes people get the title without having the chops for it.
If the APIs are meant for public consumption, requiring feature parity makes a lot of sense. But when it’s for internal use by your own developers, waiting means you are making a bunch of new API endpoints no one will ever use. People will write more and more code using the older endpoints and those endpoints will start getting changes that your new ones will need ported over.
I think if you are going to force people to use new endpoints, you’ll need them to either write the endpoints themselves or have a team member who can write it for them and account for this while planning. If getting a new endpoint requires putting in a JIRA ticket with a separate backend team, 4 planning meetings, and a month wait, people are just going to stick with what currently exists.
This is how we have 3 different APIs that sometimes do the same thing, but most times are incomplete compared to the original v1, which in the meantime wasn’t properly maintained because we were “migrating”, and now you have to use bits and pieces of all 3 of them to do anything.
It’s a nightmare. Can’t wait for the next genius to come along and start a v4 that will never be completed and will only re-implement parts of the old APIs while implementing all the new features.
In my experience, having to write new v2 (or in my case v4) endpoints for most new features was expected.
v4??? At that point I’m just gonna guess the data
It was basically the same thing. In the code base, there was only v3 and v4. I never bothered to check what happened to v1 and v2, but I suspect they were used in an older, archived code base.
We’ve got 10 APIs, with a fraction of a percent of code coverage. None of the responses/requests/error messages/fetch services are standardized.
So unsurprisingly this project has been going on for 6 years.
Cheezus Crust!
It gets worse. Half the site is MVC, half is blazor. We depend on mainframe connections and external vendors, who in turn have their own API, which we have a wrapper API for.
The entire GraphQL fusion schema got nuked about two months ago, and we’re still scrambling to fix it.
And each of our environments is a half-environment that meshes with the others to create a data-integrity hellscape.
It’s ~~turtles~~ rot all the way down.

who in turn have their own API, which we have a wrapper API for
Oh noooo!
That one is by far the worst to work with.
And by API v1 you meant a stored proc, right? 😝
This thread is giving me so much anxiety. I can’t read any more of these comments.