I was updating Gradle in my React Native PDF repository from the rather old version 2.2 to version 7.3 when I got the error Could not find method implementation() for arguments.

I searched online and tested several suggestions, but most of them didn't work.
Only after I decided to replace the now-deprecated JCenter repository did I find the solution to this problem.

I replaced jcenter() in my build.gradle file with mavenCentral(), like this:

buildscript {
    repositories {
        mavenCentral()
    }

    dependencies {
        classpath 'com.android.tools.build:gradle:1.3.1'
    }
}

Since Maven Central is the de-facto standard repository for Java packages, I hoped everything would work correctly with this setup.
However, I kept getting the same error.
When I checked Maven Central, I found that the last build of the Android tools was from 2017!
Obviously, that would not be compatible with the latest Gradle.

It looks like Google has stopped updating this package on Maven Central, and it should now be downloaded from their own Google Maven repository.
After updating build.gradle with the following settings, the error was resolved:

buildscript {
    repositories {
        google()
    }

    dependencies {
        classpath 'com.android.tools.build:gradle:7.0.4'
    }
}
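If you want to double-check which version of the Android Gradle plugin now ends up on the build classpath, you can list the buildscript dependencies (this assumes your project uses the Gradle wrapper):

# Prints the dependencies of the buildscript classpath, including the
# resolved version of com.android.tools.build:gradle.
./gradlew buildEnvironment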

As a web software developer, you will often face tasks like setting up various servers, be it web servers like Apache, nginx, or LiteSpeed, or the working environment for a scripting language like PHP or Node.js. This setup brings a lot of compatibility and security issues with it. In recent years, Docker has established itself as the go-to solution for setting up a local or remote working environment. While on the local developer's machine it's as easy as running brew install docker or apt install docker, when it comes to setting up a real production server, it's not that simple. Production servers usually should not run any software as the root (administrator) user, to avoid security issues, yet this is how Docker is started by default - with full system privileges. As you can imagine, having an insecure application in an enterprise environment is not desirable. There is, however, a setup for Docker called "rootless mode" that allows running the software as another user with restricted privileges. I will show how to install Docker and Docker Compose using terminal commands and how to automate the process using Ansible, a server configuration tool.
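For a quick preview of the manual part that the Ansible playbook later automates, a minimal sketch based on Docker's rootless install script looks like this (the user id 1000 is an example and should match your unprivileged user):

# Install Docker in rootless mode for the current non-root user,
# using the install script from Docker's own download site.
curl -fsSL https://get.docker.com/rootless | sh

# Start the rootless daemon under the user's systemd instance and
# make it start automatically on login.
systemctl --user enable --now docker

# Point the Docker CLI at the rootless daemon's socket
# (replace 1000 with your actual user id if it differs).
export DOCKER_HOST=unix:///run/user/1000/docker.sock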

Read More

If you're here, you've probably heard that ElasticSearch is awesome and that you can do many things with it, such as searching your databases, aggregating your data, monitoring your servers, collecting logs, and auto-suggesting your users' search input.

The purpose of this article is to get you started with ElasticSearch by quickly installing and running the database on a Linux or macOS computer, in a few simple steps.

Prerequisites

For this tutorial, you will need two things:

  • An open terminal window, as we will be executing some system commands
  • An internet connection

Downloading ElasticSearch

It's recommended to download the latest version of the software, as it's usually the most secure and offers the best performance. At the time of writing this article, the latest version of ElasticSearch is 8.3, but regardless of when you're reading this, you can find the latest version on this official page.

Without further ado, you can run this command to download ElasticSearch on Linux:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.2-linux-x86_64.tar.gz && tar -xzf elasticsearch-8.3.2-linux-x86_64.tar.gz

or run this one if you're using macOS:

curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.2-darwin-x86_64.tar.gz && tar -xzf elasticsearch-8.3.2-darwin-x86_64.tar.gz

From here on, the steps overlap, so the commands will be the same for either of these operating systems.
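Optionally, you can verify the integrity of the download; Elastic publishes a .sha512 checksum file next to every archive on the same artifacts server (the example below uses the Linux file name):

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.3.2-linux-x86_64.tar.gz.sha512
# Compares the archive against the published checksum and prints "OK" on success.
shasum -a 512 -c elasticsearch-8.3.2-linux-x86_64.tar.gz.sha512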

Starting ElasticSearch

The above command will download the software archive and extract it to a folder called elasticsearch-8.3.2.
Navigate to that folder using cd elasticsearch-8.3.2

There is no further configuration required, so you can start ElasticSearch using the following command:

./bin/elasticsearch

After a few seconds, you should see the JVM (Java Virtual Machine) initializing and ElasticSearch preparing to start.

Once everything is ready and the software has started, you will see notes like this:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️ Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
***************

ℹ️ HTTP CA certificate SHA-256 fingerprint:
***************

ℹ️ Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
***************

ℹ️ Configure other nodes to join this cluster:
• On this node:
⁃ Create an enrollment token with `bin/elasticsearch-create-enrollment-token -s node`.
⁃ Uncomment the transport.host setting at the end of config/elasticsearch.yml.
⁃ Restart Elasticsearch.
• On other nodes:
⁃ Start Elasticsearch with `bin/elasticsearch --enrollment-token <token>`, using the enrollment token that you generated.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Write down the password for your elastic user, because you will need it to access ElasticSearch directly through a browser, Kibana, or any other UI.

To test your setup, you can open the URL of your ElasticSearch server in a browser or use curl to make a request directly from the command line.
Note that the terminal window we have used so far is occupied by ElasticSearch, so if you want to run further commands, you will have to open another window.

curl --cacert $ES_HOME/config/certs/http_ca.crt -u elastic https://localhost:9200

If everything went fine, you will be asked to log in with the username elastic and the password generated by ElasticSearch.
After that, you will be greeted with a welcome message from the server - a short JSON document describing your cluster.

If you get an error instead of the welcome message, you may need to switch from using http to https.

Running in the background

As you can already see, running ElasticSearch in the terminal like this is not very convenient. The terminal window is blocked and unusable, and ElasticSearch will stop if we close the terminal. To avoid these issues, we can start ElasticSearch in the background, where it will keep running until we stop it, even after the terminal window is closed. We can do that with the following command:

./bin/elasticsearch -d -p pid
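The -p pid flag writes the process ID to a file called pid, so when you later want to stop the background instance, you can do it from the same folder:

# Kills the process whose ID is stored in the "pid" file created at startup.
pkill -F pid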

Note: ElasticSearch will stop when your server is restarted or shut down, and it will not start automatically when the server is turned on again. To make ElasticSearch run on startup, you will have to set it up as a systemd service (the official DEB and RPM packages already ship with one).
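If you want to stay with the tar.gz installation from this article, a rough sketch of such a systemd service could look like the following; the installation path and the elastic user are assumptions you should adapt to your own setup:

# A sketch of a user-created unit file, assuming the archive was extracted to
# /home/elastic/elasticsearch-8.3.2 and is owned by a non-root user named
# "elastic" (ElasticSearch refuses to run as root).
sudo tee /etc/systemd/system/elasticsearch.service > /dev/null <<'EOF'
[Unit]
Description=ElasticSearch (tar.gz install)
After=network.target

[Service]
User=elastic
ExecStart=/home/elastic/elasticsearch-8.3.2/bin/elasticsearch
LimitNOFILE=65535
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Reload systemd, then start ElasticSearch now and on every boot.
sudo systemctl daemon-reload
sudo systemctl enable --now elasticsearch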

API considerations

  • An API exists solely to deliver a SERVICE by being consumed by "clients". It is similar to service providers in real life, be it a shop, a restaurant, the postal service, etc. You want to deliver a good service and for the clients to be "happy". If you have ever had to consume a third-party API in your projects, maybe you have experienced the following:

    • I wonder if there is an endpoint to request multiple results in a single request instead of sending a lot of single result requests?
    • Is it possible to filter the results? I don’t need all these.
    • Do I really need to provide all those hundred parameters that look like they could be optional?
    • I followed the example but I keep getting an error with no explanation.
    • Where is the documentation?
    • Why is the naming so inconsistent? It's confusing.

    When you are the one building the API, you also want to ask yourself those same questions. You are not building an API for the sake of building it; someone will use it! It could be yourself, your teammates, another team in the company, or even clients who are paying for your product.

  • Modifying an API contract is HARD, especially if your API is public. Any change in your API's behavior can break the clients and their applications (it can cause crashes and affect millions of users). In practice, an API will always have to evolve to a certain degree. There are techniques to manage these changes, like API versioning (illustrated briefly below) or backwards-compatibility support, but it will always be a tricky exercise.
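As a tiny illustration of the URL-versioning technique (the host, resource, and versions below are made up for the example), existing clients keep calling the old contract while new clients opt into the new one:

# Hypothetical example: v1 keeps its original contract for existing clients...
curl https://api.example.com/v1/articles/42
# ...while breaking changes (renamed fields, new required parameters, etc.)
# are introduced only under a new version prefix.
curl https://api.example.com/v2/articles/42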

That's why you WANT to spend time on designing your API. It's the phase where early mistakes can have a huge impact on your API's usability. Those mistakes are also often the most difficult to fix at a later stage. The time spent can vary depending on the size of the API, whether it's public-facing with a lot of clients, and so on. You don't need 10 hours of design for a throwaway API only used for some prototype, but keep that in mind.

⚠ Make sure you understand REST

If your answer to the question "What is REST for you?" is just "it's about using GET, POST and DELETE", it's probably a good idea to refresh your memory a bit by reading the following article: https://restfulapi.net/.

It's well written and concise, with more details if you want to dig a bit deeper.

I insist on having solid fundamentals because there are a lot of misconceptions about what REST does and doesn't enforce.

For example, and contrary to popular belief, REST doesn't tell you which HTTP method (GET, POST, PUT, DELETE) you should use for which operations, as REST is not even technically tied to HTTP.

It is therefore more productive to know exactly what the constraints of the REST architecture are, work with them, and know which parts of the design require you to make more complex decisions.

Key-points

  • It’s bold but you NEED to know and understand REST to build good RESTful APIs.
  • Having solid foundations will let you focus on the more custom (and more interesting) parts of your design.

Naming and endpoint structure is hard but there are good practices out there

In practice, most public APIs follow some conventions considered "reasonable". By following the same conventions, you help other developers understand your API faster, as they are used to seeing those patterns. Here is a very good (and short) list of some popular naming conventions:

REST Resource Naming Guide. - https://restfulapi.net/resource-naming/

Among some of those naming guidelines:

  • Use nouns to represent resources: api/my-resource/, api/my-other-resource, api/my-resource/{id}/sub-resource
  • Avoid verbs, especially if they mirror what the HTTP methods already express (in an HTTP context): avoid api/delete_my-resource and api/get_my-resource when you can simply have HTTP DELETE api/my-resource and HTTP GET api/my-resource (see the example after this list)
  • BE CONSISTENT! You don’t want the same parameters to have totally different names in 2 similar endpoints.
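As a concrete sketch of those points (the host and the articles resource are invented for the example), the operation is carried by the HTTP method rather than by a verb in the path:

# Verb-free, noun-based resources: the HTTP method says what happens.
curl -X GET https://api.example.com/api/articles          # list articles
curl -X GET https://api.example.com/api/articles/42       # fetch a single article
curl -X DELETE https://api.example.com/api/articles/42    # delete that article

# Avoid paths like these, which duplicate the HTTP method as a verb:
#   https://api.example.com/api/get_articles
#   https://api.example.com/api/delete_article?id=42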

I would be lying if I said that every time we design APIs in our team, everything goes well. We sometimes have "passionate" discussions, and two developers may propose different designs that are both technically correct and elegant. As mentioned earlier, there are a lot of things REST doesn't enforce. At times like this, it's not worth wasting too much time on it. Pick one design and stick with that philosophy for your entire API. The key is consistency.

One of my personal tips is to read the documentation and API contracts of popular open APIs accessible on the Internet. They give good examples and can be a source of inspiration. Here are some of my favorites:

Key-points

  • Using common naming conventions makes your API more predictable and easier for other developers to understand.
  • You don't have to start from scratch and reinvent the wheel; a lot of people have come up with battle-tested naming and structure recommendations.
  • REST is flexible enough to give you freedom but always plan wisely.

Always have your design proposal reviewed by your peers

The "passionate" discussions I mentioned earlier are not considered a bad thing in our team. Quite the opposite: we encourage challenges when designing new systems. Having various people with their own knowledge, vision, and skills can help find not only mistakes and typos but also fundamental design issues. These issues can be costly at a later stage of development, so you want to detect them as early as possible.

We recently started to involve not only developers in that review process, but also our QA engineers.

The QA team will be the first real client using your API. As such, it is in an ideal position to detect validation mistakes, ask for more details about unclear specifications, and make sure the product scenarios are covered.

  • “What’s the exact format of that parameter representing a date? A timestamp? An ISO 8601?”
  • "What is supposed to happen if no result is found?"
  • “Are we okay with an error or do we prefer an empty response?”

Everyone's opinion is valuable as long as it's communicated in a constructive way. When you make a comment, raise an objection, or want to offer a proposal, always explain your reasoning properly.

Key-points

  • More people and more points of view mean fewer blind spots.
  • It's an excellent way to transfer knowledge of the product's specs when performed with a constructive mindset.

Document your API

I know a lot of developers don't like writing documentation and find it boring. But if there is something you should not skip when building a Web API, it's the DOCUMENTATION. Again, the main purpose of an API is to be used. You don't want your clients to spend hours or days trying to figure out where your endpoints are located, the format of the parameters, or the required headers.

The minimum should be the list of all endpoints with their:

  • HTTP methods
  • List of headers/path/query parameters with their validation
  • The format of the body
  • Response and error codes, and response payload schemas

You can write your documentation with the OpenAPI Specification (formerly known as the Swagger Specification). It's quite popular and there is an entire ecosystem of tools built on top of it. But the most important thing is that your documentation is accessible and gives enough information for your API to be used.
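As a minimal sketch of what that can look like (the articles endpoint and every field shown are made up for the example), an OpenAPI file describing a single endpoint could be created like this:

# Writes a minimal, hypothetical OpenAPI document for one GET endpoint.
cat <<'EOF' > openapi.yaml
openapi: 3.0.3
info:
  title: Example Articles API
  version: 1.0.0
paths:
  /api/articles/{id}:
    get:
      summary: Fetch a single article by its id
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: The requested article
        '404':
          description: No article exists with that id
EOF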

It’s also easier to write the documentation as you design and review your API contract. It will serve as a proper reference for the implementation and the testing.

I suggest you look at the documentation of the following APIs. You will notice that they go further than a simple list of endpoints: they also describe the global concepts and entities they use, and they have interactive parts:

Key-points

  • An API without documentation is almost useless.
  • The documentation serves as a reference for the different team members and roles. This can help avoid misunderstandings.
  • Having documentation early will be useful in ALL the following steps of the development cycle.

Migrating GMail

Migrating YouTube

https://support.google.com/youtube/answer/3056283?hl=en

  • Make YouTube account with your new Google account
  • When adding your new account as a manager, it will not show the name, picture, or email until you've added it to the list
  • Accept the invitation from your new account
  • Wait 7 days (restriction for change of permissions)
  • Log in with your old account and make your new account the primary owner
    After that, your old account will lose the right to edit the permissions on your new account

Migrating Google Drive

Install and configure TrueNAS

https://www.youtube.com/watch?v=V0XXHk147pw

Install and configure NextCloud on TrueNAS

https://www.youtube.com/watch?v=TgSiYpcZZPY

Configure reverse proxy

  • Increase upload file size limit

https://serverfault.com/questions/559451/nginx-client-max-body-size-per-location-block-with-php-frontcontroller-pattern
In response to this error - https://www.cyberciti.biz/faq/linux-unix-bsd-nginx-413-request-entity-too-large/
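For reference, the fix from those links boils down to raising nginx's client_max_body_size; a sketch of the change and reload (the 512M value is just an example):

# nginx returns "413 Request Entity Too Large" when an upload exceeds
# client_max_body_size (default 1m). Set a larger value inside the http,
# server, or location block of the reverse-proxy configuration, for example:
#
#     client_max_body_size 512M;
#
# Then test the configuration and reload nginx:
sudo nginx -t && sudo systemctl reload nginx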

Migrating Google Chrome data

Migrating Contacts

Migrating AdSense

  • Create a regular Google account and verify it
  • Add the new account to your verified AdSense account
  • Activate the new account as an AdSense user
  • Make the new account an administrator of the AdSense account
  • Log in to your new Google account and remove the old Google account from AdSense

To protect your application with Single Sign-On (SSO), there are several things that you need to configure. One of them is the so-called Identity Provider (IdP).
Okta is a widely used enterprise identity provider. In this guide, I will show you how to create a developer account with Okta and start using their services with a test OpenID Connect (OIDC) application.
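Once the developer account exists, a quick way to confirm the org is reachable is to fetch the standard OIDC discovery document; the dev-123456.okta.com domain below is a placeholder for your own Okta developer domain:

# The /.well-known/openid-configuration path is defined by the OIDC standard;
# "default" is the name of Okta's built-in authorization server.
curl https://dev-123456.okta.com/oauth2/default/.well-known/openid-configuration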

Read More

A YouTube alert appears, and clicking its button leads to the 2-Step Verification welcome page.
After that, it asks you to log in with a password.
At the end, it simply shows the profile page.

As described here - https://support.google.com/chrome/thread/2583782/2-step-authentication-link-not-appearing-in-my-google-account?hl=en

If you are part of a G Suite domain, keep reading to find out how to fix this; otherwise, there is no other choice but to contact Google Support and hope that they can help you.

Read More

  1. Go to the branch office personally
  2. Bring a friend to translate
  3. Credit card benefits and choosing the best card
  4. Using card with Apple Pay
  5. About mobile apps
  • Not the best interface
  • Too many apps
  • Not fully functional
  • SMBC VPoint card has delayed usage reporting
  • 2 000 detected yen vs 1000 yen usage

Some time ago, I searched for a simple alternative to Kafka that I could run in a container deployed in Kubernetes.
To give more context on what I needed: setting up Kafka is complex, as it depends on Zookeeper, and having an extra layer to manage is costly.
However, I had one more requirement for the replacement - it had to be compatible with the Kafka protocol, as our systems were already integrated with Kafka.

Enter Redpanda by Vectorized. It promised 100% Kafka compatibility, Kubernetes support, no Zookeeper, no JVM, and 10x faster speeds.
At first glance this solution seemed perfect, but when I went deeper down the rabbit hole, I found a ton of issues.

Read More