Question- We already understand what the customer needs, why do we need to take up time with Requirements gathering?
Answer- You'll thank me later
In this article, we’ll walk through the steps to build a custom right-click menu using Vue 3 and the Composition API. This is done by suppressing the browser’s default context menu and displaying one of our own. By following modular design principles, the context menu will be reusable in other areas of an application.
Recently I've been using JUnit Pioneer, which is an extension library for JUnit Jupiter (JUnit 5). It contains a lot of useful annotations that are really easy to use in tests, for example to generate a range of numbers for input into a parameterized test. This is a presentation about Pioneer that I gave on March 4, 2021.
In case the embedded slideshow doesn’t work properly, here is a link to the slides (opens in a new window/tab).
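To give a quick taste of what Pioneer offers, here is roughly what a parameterized test driven by a range of ints looks like (a minimal sketch assuming Pioneer's @IntRangeSource annotation; the package and attribute names may vary slightly between Pioneer versions):

import org.junit.jupiter.params.ParameterizedTest;
import org.junitpioneer.jupiter.params.IntRangeSource;

import static org.junit.jupiter.api.Assertions.assertTrue;

class RangeExampleTest {

    // Pioneer generates one test invocation per int produced by the range source
    @ParameterizedTest
    @IntRangeSource(from = 1, to = 10)
    void valueIsPositive(int value) {
        assertTrue(value > 0);
    }
}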
We have several interns this summer, and each Friday we’re doing a short presentation on a different software development topic. On June 28, I gave a short presentation on (unit) testing. This presentation is very light on code, and heavier on philosophy. I shared the slides on SlideShare and have embedded them below.
In case the embedded slideshow doesn’t work properly, here is a link to the slides (opens in a new window/tab).
I’ve been using SDKMAN! for a while now to make it really easy to install and manage multiple versions of various SDKs like Java, Kotlin, Groovy, and so on. I recently gave a mini-talk on SDKMAN! and have embedded the slides below.
In case the embedded slideshow doesn’t work properly, here is a link to the slides (opens in a new window/tab).
I just gave a short presentation on JUnit 5 at my company, Fortitude Technologies. JUnit 5 adds a bunch of useful features for developer testing such as parameterized tests, a more flexible extension model, and a lot more. Plus, it aims to provide a cleaner separation between the testing platform that IDEs and build tools like Maven and Gradle use, versus the developer testing APIs. It also provides an easy migration path from JUnit 4 (or earlier) by letting you run JUnit 3, 4, and 5 tests in the same project. Here are the slides:
Over the past year, several microservices I have worked on responded to specific events and then executed native OS processes, for example launching custom C++ applications, Python scripts, etc. In addition to simply launching processes, those services also needed to obtain information about running processes upon request, or shut down processes upon receiving shutdown events. A lot of what the services were doing was controlling native processes in response to specific external events, whether via JMS queues, Kafka topics, or even XML files dropped in specific directories.
Since the microservices were implemented in Java, I had to use the less-than-stellar Process API, which provides only the most basic support. Even though a few additional features were added in Java 8 - such as being able to check if a process is alive using Process#isAlive and waiting for process exit with a timeout - you still cannot obtain a handle to a running process by its process ID, nor can you even get the process ID of a Process object. As a result of these limitations I wrote a bunch of utilities that basically call out to native programs like grep and pgrep to gather information on running processes, child processes for a specific process ID, and so on. Even worse, to simply find the process ID for a Process instance I used reflection to directly access the private pid field in the java.lang.UNIXProcess class (which first required checking that we were actually dealing with a UNIXProcess instance by comparing the class name as a string, since UNIXProcess is package-private and thus you cannot reference its class directly in code).
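For the curious, the reflection hack looked roughly like the following (a simplified sketch, not the exact production code):

import java.lang.reflect.Field;
import java.util.OptionalLong;

// Pre-Java 9 hack: read the private "pid" field from java.lang.UNIXProcess
// via reflection; it only works on UNIX-like JVMs, hence the class-name check
static OptionalLong tryGetPid(Process process) {
    if (!"java.lang.UNIXProcess".equals(process.getClass().getName())) {
        return OptionalLong.empty();
    }
    try {
        Field pidField = process.getClass().getDeclaredField("pid");
        pidField.setAccessible(true);
        return OptionalLong.of(pidField.getInt(process));
    } catch (ReflectiveOperationException e) {
        return OptionalLong.empty();
    }
}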
Most people writing and talking about Java 9 are excited about things like the new module system in Project Jigsaw; the Java shell/REPL; the HTTP/2 client; convenience factory methods for collections; and so on. But I am maybe even more excited about the process API improvements, since it means I can get rid of a lot of the hackery I used to obtain process information. The information you can now obtain from a Process instance includes the process ID, the command and its arguments, the full command line, the user who owns the process, and the start time.
For example, to obtain the process ID (written as a unit test, and using AssertJ assertions):
@Test
public void getPid() throws IOException {
    ProcessBuilder builder = new ProcessBuilder("/bin/sleep", "5");
    Process proc = builder.start();

    // Process#pid (new in Java 9) returns the native process ID
    assertThat(proc.pid()).isGreaterThan(0);
}
Or, to obtain all sorts of different process metadata, use ProcessHandle.Info (also new in JDK 9), which you get via the info() method on Process:
@Test
public void processInfo() throws IOException {
ProcessBuilder builder = new ProcessBuilder("/bin/sleep", "5");
Process proc = builder.start();
ProcessHandle.Info info = proc.info();
assertThat(info.arguments().orElse(new String[] {})).containsExactly("5");
assertThat(info.command().orElse(null)).isEqualTo("/bin/sleep");
assertThat(info.commandLine().orElse(null)).isEqualTo("/bin/sleep 5");
assertThat(info.user().orElse(null)).isEqualTo(System.getProperty("user.name"));
assertThat(info.startInstant().orElse(null)).isLessThanOrEqualTo(Instant.now());
}
Note in the above test that every method in ProcessHandle.Info returns an Optional, which is the reason for the orElse in the assertions. Another thing that I really needed - and thankfully JDK 9 now provides - is the ability to get a handle to an existing process by its process ID using the ProcessHandle#of method. Here is a simple example as a unit test:
@Test
public void getProcessHandleForExistingProcess() throws IOException {
    ProcessBuilder builder = new ProcessBuilder("/bin/sleep", "5");
    Process proc = builder.start();
    long pid = proc.pid();

    // ProcessHandle.of returns an Optional<ProcessHandle> for the given pid
    ProcessHandle handle = ProcessHandle.of(pid).orElseThrow(IllegalStateException::new);
    assertThat(handle.pid()).isEqualTo(pid);
    assertThat(handle.info().commandLine().orElse(null)).isEqualTo("/bin/sleep 5");
}
As with the ProcessHandle.Info methods, ProcessHandle#of returns an Optional, so again that is the reason for the orElseThrow. In a real application you might take some other action if the returned Optional is empty, or maybe you just throw an exception as the above test does. As a last example, here is a test that launches a sleep process, then streams all visible processes and finds the launched sleep process:
@Test
public void allProcesses() throws IOException {
    ProcessBuilder builder = new ProcessBuilder("/bin/sleep", "5");
    builder.start();

    // Stream every process visible to this process and look for the sleep command
    String sleep = ProcessHandle.allProcesses()
            .map(handle -> handle.info().command().orElse(String.valueOf(handle.pid())))
            .filter(cmd -> cmd.equals("/bin/sleep"))
            .findFirst()
            .orElse(null);

    assertThat(sleep).isNotNull();
}
In the above test, since allProcesses returns a Stream we can use normal Java 8 stream API features like map, filter, and so on. In this example, we first map (transform) each ProcessHandle to its command (i.e. "/bin/sleep"), or to the process ID if the command Optional is empty. Next we filter on whether the command equals /bin/sleep and call findFirst, which returns an Optional, and finally use orElse to return null if the returned Optional was empty. Of course the above test can fail if, for example, there already happens to be a /bin/sleep 5 process executing in the operating system, but we won't really worry about that here.
One last piece of information that might be needed is the current process, i.e. when a process needs to get a handle to its own process. You can now accomplish this easily by calling ProcessHandle.current(). The Javadoc notes that you cannot use the returned handle to destroy the current process, and says to use System#exit instead.
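For example, a process can inspect itself like this:

// Obtain a handle to the current process and read its pid and metadata
ProcessHandle current = ProcessHandle.current();
long myPid = current.pid();
ProcessHandle.Info myInfo = current.info();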
In addition to the process information shown in the above examples, there is also a new onExit method that returns a CompletableFuture which, according to the Javadoc, "provides the ability to trigger dependent functions or actions that may be run synchronously or asynchronously upon process termination". The following example uses the native cmp program to compare two files and, upon exit, applies a lambda expression to check whether the exit value is zero (meaning the two files are identical). Finally, it uses the Future#get method with a one-second timeout (to avoid blocking indefinitely) to obtain the result:
Process proc = new ProcessBuilder("/usr/bin/cmp", "/tmp/file1.txt", "/tmp/file2.txt").start();
Future<Boolean> areIdentical = proc.onExit().thenApply(proc1 -> proc1.exitValue() == 0);
if (areIdentical.get(1, TimeUnit.SECONDS)) { ... }
So a big thanks to the Java team at Oracle (I can't believe I just thanked Oracle) for adding these new features! In the "real world", where systems are heterogeneous and need to integrate in myriad ways, having a much more featureful and robust process API helps a lot for any system that needs to launch, monitor, and destroy native processes.
A few months ago I gave a short presentation on AWS Lambda to my company, Fortitude Technologies. AWS Lambda is basically a "serverless" framework that lets you deploy and run code in Amazon's cloud without managing, provisioning, or administering any servers whatsoever. Here are the slides:
In case the embedded slide show isn't working, here is a link to the slides on Slideshare.
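To give a flavor of what the code side looks like, a Java Lambda handler can be as small as the following sketch (assuming the aws-lambda-java-core library's RequestHandler interface; the greeting logic is made up purely for illustration):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// AWS Lambda invokes handleRequest for each incoming event; there is no
// server to provision or manage on our side
public class GreetingHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        context.getLogger().log("Handling request for " + name);
        return "Hello, " + name + "!";
    }
}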
In a previous post I described the very small sparkjava-testing library I created to make it really simple to test HTTP client code using the Spark micro-framework. It is basically one simple JUnit 4 rule (SparkServerRule) that spins up a Spark HTTP server before tests run, and shuts it down once tests have executed. It can be used either as a @ClassRule or as a @Rule. Using @ClassRule is normally what you want to do: it starts an HTTP server before any test has run, and shuts it down after all tests have finished.
In that post I mentioned that I needed to do an "incredibly awful hack" to reset the Spark HTTP server to non-secure mode so that, if some tests run securely using a test keystore, other tests can also run either non-secure or secure, possibly with a different keystore. I also said the reason I did that was because "there is no way I found to easily reset security". The reason for all that nonsense was that I was using the static methods on the Spark class such as port, secure, get, post, and so on. Using the static methods also implies only one server instance across all tests, which is also not so great.
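To illustrate the static style, a server configured this way looks like the following (a minimal sketch): every call goes through the Spark class and therefore targets the single, implicit server instance.

import static spark.Spark.get;
import static spark.Spark.port;

public class StaticStyleExample {
    public static void main(String[] args) {
        // Both calls configure the one implicit Spark server shared by all callers
        port(4567);
        get("/hello", (req, res) -> "Hello from the implicit Spark server");
    }
}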
Well, it turns out I didn't really dig deep enough into Spark's features, because there is a really simple way to spin up separate and independent Spark server instances. You simply call the static Service.ignite() method, which returns an instance of Service. You then configure the Service however you want, e.g. change the port, add routes and filters, set the server to run securely, etc. Here's an example:
Service http = Service.ignite();
http.port(56789);
http.get("/hello", (req, resp) -> "Hello, Spark service!");
So now you can create as many servers as you want. This is exactly what is needed for the SparkServerRule, which has been refactored to use Service.ignite() to get a separate server for each test. It now has only one constructor, which takes a ServiceInitializer that can be used to do whatever configuration you need, add routes, filters, etc. Since ServiceInitializer is a @FunctionalInterface you can simply supply a lambda expression, which makes it cleaner. Here is a simple example:
@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(http -> {
http.get("/ping", (request, response) -> "pong");
http.get("/health", (request, response) -> "healthy");
});
This is a rule that, before any test is run, spins up a Spark server on the default port 4567 with two GET routes, and shuts the server down after all tests have completed. To do things like change the port and IP address in addition to adding routes, you just call the appropriate methods on the Service instance (in the example above, the http object passed to the lambda). Here's an example:
@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(https -> {
https.ipAddress("127.0.0.1");
https.port(56789);
URL resource = Resources.getResource("sample-keystore.jks");
https.secure(resource.getFile(), "password", null, null);
https.get("/ping", (request, response) -> "pong");
https.get("/health", (request, response) -> "healthy");
});
In this example, tests will be able to access a server with two secure (https) endpoints at IP 127.0.0.1 on port 56789. So that's it. On the off chance someone other than me was actually using this rule, the migration path is really simple: you just need to configure the Service instance passed to the SparkServerRule constructor as shown above. Now each server is totally independent, which allows tests to run in parallel (assuming they're on different ports). Better yet, I was able to remove the hack where I used reflection to go under the covers of Spark and manipulate fields, etc. So, test away on that HTTP client code!
Testing HTTP client code can be a hassle. Your tests either need to run against a live HTTP server, or you somehow need to figure out how to send mock requests, which is generally not easy in most libraries that I have used. The tests should also be fast, meaning you need a lightweight server that starts and stops quickly. Spinning up heavyweight web or application servers, or relying on a specialized test server, is generally error-prone, adds complexity, and slows tests down. In projects I'm working on lately we are using Dropwizard, which provides first-class support for testing JAX-RS resources and clients via JUnit rules. For example, it provides DropwizardClientRule, a JUnit rule that lets you implement JAX-RS resources as test doubles and starts and stops a simple Dropwizard application containing those resources. This works great if you are already using Dropwizard, but if not then a great alternative is Spark. Even if you are using Dropwizard, Spark can still work well as a test HTTP server.
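For context, using that Dropwizard rule looks roughly like this (a sketch; PingResource is a hypothetical JAX-RS test-double resource defined in the test sources):

// DropwizardClientRule starts a small Dropwizard app hosting the given resources
@ClassRule
public static final DropwizardClientRule DROPWIZARD = new DropwizardClientRule(new PingResource());

@Test
public void pings() {
    Client client = ClientBuilder.newBuilder().build();

    // baseUri() points at the locally started test application
    String body = client.target(DROPWIZARD.baseUri())
            .path("ping")
            .request()
            .get(String.class);

    assertThat(body).isEqualTo("pong");
}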
Spark is self-described as a "micro framework for creating web applications in Java 8 with minimal effort". You can create the stereotypical "Hello World" in Spark like this (shamelessly copied from Spark's web site):
import static spark.Spark.get;
public class HelloWorld {
public static void main(String[] args) {
get("/hello", (req, res) -> "Hello World");
}
}
You can run this code and visit http://localhost:4567/hello in a browser or using a client tool like curl or httpie. Spark is a perfect fit for creating HTTP servers in tests (whether you call them unit tests, integration tests, or something else is up to you; I will just call them tests here). I have created a very simple library, sparkjava-testing, that contains a JUnit rule for spinning up a Spark server for functional testing of HTTP clients. This library consists of one JUnit rule, the SparkServerRule. You can annotate this rule with @ClassRule or just @Rule. Using @ClassRule will start a Spark server one time before any test is run. Then your tests run, making requests to the HTTP server, and finally once all tests have finished the server is shut down. If you need true isolation between every single test, annotate the rule with @Rule and a test Spark server will be started before each test and shut down after each test, meaning each test runs against a fresh server. (The SparkServerRule is a JUnit 4 rule mainly because JUnit 5 is still in milestone releases, and because I have not actually used JUnit 5.)
To declare a class rule with a test Spark server with two endpoints, you can do this:
@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(() -> {
get("/ping", (request, response) -> "pong");
get("/healthcheck", (request, response) -> "healthy");
});
The SparkServerRule constructor takes a Runnable which defines the routes the server should respond to. In this example there are two HTTP GET routes, /ping and /healthcheck. You can of course implement the other HTTP verbs such as POST and PUT. You can then write tests using whatever client library you want. Here is an example test using a JAX-RS client:
@Test
public void testSparkServerRule_HealthcheckRequest() {
    client = ClientBuilder.newBuilder().build();

    Response response = client.target(URI.create("http://localhost:4567/healthcheck"))
            .request()
            .get();

    assertThat(response.getStatus()).isEqualTo(200);
    assertThat(response.readEntity(String.class)).isEqualTo("healthy");
}
In the above test, client is a JAX-RS Client instance (it is an instance variable which is closed after each test). I'm using AssertJ assertions in this test. The main thing to note is that your client code must be parameterizable, so that the local Spark server URI can be injected instead of the actual production URI. When using the JAX-RS client as in this example, this means you need to be able to supply the test server URI to the Client#target method. Spark runs on port 4567 by default, so the client in the test uses that port.
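As a sketch of what "parameterizable" means here, a hypothetical client wrapper might simply accept the base URI so tests can point it at the local Spark server (the class and method names are made up for illustration):

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import java.net.URI;

public class HealthcheckClient {

    private final Client client = ClientBuilder.newBuilder().build();
    private final URI baseUri;

    // Production code passes the real service URI; tests pass http://localhost:4567
    public HealthcheckClient(URI baseUri) {
        this.baseUri = baseUri;
    }

    public String checkHealth() {
        return client.target(baseUri).path("healthcheck").request().get(String.class);
    }
}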
The SparkServerRule has two other constructors: one that accepts a port in addition to the routes, and another that takes a SparkInitializer. To start the test server on a different port, you can do this:
@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(6543, () -> {
get("/ping", (request, response) -> "pong");
get("/healthcheck", (request, response) -> "healthy");
});
You can use the constructor that takes a SparkInitializer to customize the Spark server; for example, in addition to changing the port you can also set the IP address and make the server secure. The SparkInitializer is a @FunctionalInterface with one method, init(), so you can use a lambda expression. For example:
@ClassRule
public static final SparkServerRule SPARK_SERVER = new SparkServerRule(
() -> {
Spark.ipAddress("127.0.0.1");
Spark.port(9876);
URL resource = Resources.getResource("sample-keystore.jks");
String file = resource.getFile();
Spark.secure(file, "password", null, null);
},
() -> {
get("/ping", (request, response) -> "pong");
get("/healthcheck", (request, response) -> "healthy");
});
The first argument is the initializer. It sets the IP address and port, then loads a sample keystore and calls the Spark#secure method to make the test server accept HTTPS connections using that keystore. You might want to customize settings - specifically the port - if running tests in parallel, to ensure parallel tests do not encounter port conflicts.
The last thing to note is that SparkServerRule resets the port, IP address, and secure settings to the default values (4567, 0.0.0.0, and non-secure, respectively) when it shuts down the Spark server. If you use the SparkInitializer to customize other settings (for example the server thread pool, static file location, before/after filters, etc.) those will not be reset, as they are not currently supported by SparkServerRule. Last, resetting to non-secure mode required an incredibly awful hack, because there is no way I found to easily reset security - you cannot just pass a bunch of null values to the Spark#secure method, as it will throw an exception, and there is no unsecure method, probably because the server was not intended to set and reset things a bunch of times like we want to do in test scenarios. If you're interested, go look at the code for the SparkServerRule in the sparkjava-testing repository, but prepare thyself and get some cleaning supplies ready to wash away the dirty feeling you're sure to have after seeing it.
The ability to use SparkServerRule to quickly and easily set up test HTTP servers, along with the ability to customize the port and IP address and to run securely in tests, has worked very well for my testing needs thus far. Note that unlike the toy examples above, you can implement more complicated logic in the routes, for example to return a 200 or a 404 for a GET request depending on a path parameter or request parameter value (see the short sketch at the end of this post). But at the same time, don't implement extremely complex logic either. Most times I simply create separate routes when I need the test server to behave differently, for example to test various error conditions. Or, I might even choose to implement separate JUnit test classes for different server endpoints, so that each test focuses on only one endpoint and its various success and failure conditions. As is often the case, the context will determine the best way to implement your tests. Happy testing!
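Here is the sketch mentioned above of a route whose response depends on a path parameter (the widget id is made up purely for illustration):

// Returns 200 with a body for the one known id, and 404 for anything else
get("/widgets/:id", (request, response) -> {
    if ("42".equals(request.params(":id"))) {
        return "the answer widget";
    }
    response.status(404);
    return "";
});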