
@Eliminate("Boilerplate")

Everyone knows that Java programming involves a good bit of boilerplate code, but it also has some great features to help mitigate this. Compile time annotation processing allows developers to hook into the compile phase of a Java program and do just about anything. By processing annotated source code, developers can generate code to automate repetitive tasks, clean up the development source, or just make Java easier to deal with.

In this 360|AnDev talk by Ryan Harter, you’ll learn how Java annotations are processed, how to reduce repetitive processes, and how to build a simple annotation processor.


Boilerplate Java (0:34)

We all know that Java has boilerplate, and some might even argue that Java is boilerplate. We can’t really argue with that. To see an example, here’s a simple model class:


public class User {
	String username;
	String firstName;
	String lastName;
	int age;

	public String getUsername() {
		return username;
	}

	public String getFirstName() {
		return firstName;
	}

	public String getLastName() {
		return lastName;
	}

	public int getAge() {
		return age;
	}
}

This is a user. We have a username, firstName, lastName, and age. I didn’t even put any constructors in this. Overall, this is a pretty simple class.

It takes a fair bit of code, and if you want to make it more useful, it takes even more. We have four properties in here, so you might want to use the builder pattern. Instead of having a bunch of overloaded constructors covering all sorts of property combinations, builders can make this easy. What happens when you want to write a builder? Here’s our builder class:


public final class UserBuilder {
	private String username;
	private String firstName;
	private String lastName;
	private int age;

	public UserBuilder() {
	}

	public UserBuilder username(String username) {
		this.username = username;
		return this;
	}

	public UserBuilder firstName(String firstName) {
		this.firstName = firstName;
		return this;
	}

We’re repeating all of those properties. We have a constructor, a setter for username, and then we have even more setters.


	public UserBuilder lastName(String lastName) {
		this.lastName = lastName;
		return this;
	}

	public UserBuilder age(int age) {
		this.age = age;
		return this;
	}

	public User build() {
		User user = new User();
		user.username = this.username;
		user.firstName = this.firstName;
		user.lastName = this.lastName;
		user.age = this.age;
		return user;
	}
}

It should be pretty easy to understand what is needed for this based on that model class. If you write a builder, nothing really needs to change from one builder to another, aside from the properties of your model class. Why doesn’t Java do that for us?

Well, it can. You have to do a little work up front, though.

Annotation processing (2:03)

This introduction will focus on annotation processing and code generation. These two things put together will allow us to eliminate boilerplate code in our apps.

Annotation processing is part of javac, the Java compiler. It runs as part of the compilation step and reads your annotated source code. If you’ve ever seen the @ in front of methods, classes, or parameters, that’s an annotation. You annotate your source code and the annotation processor will read it.

It can do several things, but the main thing we’re going to talk about is generating .java source files. During the compilation phase, we can have a program that generates new code for us. We don’t want to have to write all this boilerplate code, and with code generators and the annotation processors, we only have to write the code generator once.

You might have a complex generator that’s doing a lot of work, but even if it takes you a week to write, you only have to do it once. If you have a hundred model classes, you don’t have to do it again for all hundred of them. Plus, you can trust the generated code. It’s generated the same way every time because it’s being done by the compiler. As long as you test the generator, you can trust that the code is going to be generated the same way and that it’s going to work. You don’t have to write tests for every piece of generated code.

What makes up an annotation processor? (3:57)

We wouldn’t have an annotation processor without annotations. There are plenty of built-in annotations in Java already. I’m sure everybody has seen @Override, and I’m also sure nobody knows why we write @Override. (I don’t either, just to be clear.) It’s an annotation that’s built into Java. We can process the @Override annotation all we want.

There are also a bunch of JSRs out there for the @Inject annotation and things like that, which are already predefined. Then we have the annotation processing tool, which is a piece that a lot of people don’t quite know about or understand. It is the piece inside javac that does the actual work.

The last thing we need is the annotated code. You need a source set, your Java source files that you have annotated in some way.

Annotations (5:51)


@Target(ElementType.TYPE)
@Retention(RetentionPolicy.SOURCE)
public @interface Builder {

}

In our case, we’re going to make a code-generating annotation processor that generates a builder, for which we need an annotation. As far as I know, there isn’t a good one built in, so we’ll create our own.


We’re going to call it Builder. The @interface tells the compiler that this is an annotation. We have the @Retention annotation that tells the compiler how long it needs to keep these annotations around. In our case, we’re using the SOURCE RetentionPolicy, which means these annotations only need to be kept in the Java source code.

We’re generating this code at compile time with javac. These annotations are not retained at runtime; there’s no reflection. Moving up the stack, we’ve got the Target, which tells us what this annotation can be applied to. Annotations are not freestanding elements. Rather, they are similar to metadata that you apply to elements in Java. In our case, we’re going to apply this one to the type element. The type element is anything like classes, interfaces, etc.

This annotation will be applied to our classes and interfaces. Other options would be things like method, parameter, field, etc. Now that we have our annotation, let’s move on to the processor.

Processor (8:35)


public class FooProcessor extends AbstractProcessor {

}

To get started with our processor, we need to extend AbstractProcessor. There is a Processor interface, which AbstractProcessor implements, but AbstractProcessor takes care of a bunch of initialization for us. I’ve honestly never seen an annotation processor that directly implements Processor and doesn’t just subclass AbstractProcessor.


public class FooProcessor extends AbstractProcessor {

	public FooProcessor() {
		super();
	}

}

Your annotation processor needs a public no-arg constructor. This has to do with the way that it’s initialized, which we’ll talk about a little later on. But that’s important. You can have other constructors as well, if you want, but you do need a public no-arg constructor.


public class FooProcessor extends AbstractProcessor {
	private Messager messager;
	private Filer filer;

	

	@Override public synchronized void init(
			ProcessingEnvironment processingEnv) {
		super.init(processingEnv);
		this.messager = processingEnv.getMessager();
		this.filer = processingEnv.getFiler();
	}

}

Then we have the init method, which we override from AbstractProcessor. This is where we collect different pieces of the processing environment, and it’s only called once on our processor. We’re passed the ProcessingEnvironment, which gives us different objects that let us interact with the build system.

In this case, we’re going to get the messager and store it in a field. The messager is how we send messages back to the user. If anything fails in our annotation processing pipeline, we can send error or warning messages back to the user using the messager. You can’t use the normal System.out, because this is all handled inside javac.

The other thing we’re going to get here is a filer. The filer is how you write files. The filer is managed in a way that you have a contained build directory, which is what the build environment knows about. You use the filer to write your new files. Those are just two examples. There are other things. There are type utilities. There are element utilities, which let you get different metadata. But init is where you collect all of this information on your processing environment, and this is only called once.
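
For example, here’s a small sketch (not from the talk) that also grabs those element and type utilities in init, plus a hypothetical little helper that uses the messager to report a warning against a specific element:


private Elements elementUtils;
private Types typeUtils;

@Override public synchronized void init(
		ProcessingEnvironment processingEnv) {
	super.init(processingEnv);
	this.messager = processingEnv.getMessager();
	this.filer = processingEnv.getFiler();
	this.elementUtils = processingEnv.getElementUtils(); // helpers for element metadata
	this.typeUtils = processingEnv.getTypeUtils();       // helpers for type comparisons
}

private void warn(Element element, String message) {
	// Reported through javac rather than System.out.
	messager.printMessage(Diagnostic.Kind.WARNING, message, element);
}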


public class FooProcessor extends AbstractProcessor {

	

	@Override public SourceVersion getSupportedSourceVersion() {
		return SourceVersion.latestSupported();
	}

}

Next up, we’ve got a little bit of boilerplate. We have to return the supported source version. This is for compatibility reasons. This is like the target version on Android. Again, the only thing I’ve ever seen returned here from a processor is SourceVersion.latestSupported. Same thing you should do with your target versions. Always target the latest version of Android. We do the same thing here.

Another piece is the supported annotation types. Like I mentioned before, your annotation processor can support multiple annotations. It can support built-in annotations, annotations that come from other dependencies, like @Inject, or your own.


public class FooProcessor extends AbstractProcessor {

	

	@Override public Set<String> getSupportedAnnotationTypes() {
		return ImmutableSet.of(Builder.class.getCanonicalName());
	}

}

What we need to do here is return a set of fully qualified class names of all of the annotations that we want given to us. As an optimization for the compilation process, the annotation processing tool builds up its internal manifest of all of the annotations in each class, and it’s only going to give us the annotations that we say we care about in this method. We’re not processing arbitrary files or anything like that. We have to tell it what we support here.

If you do want to support any annotation, you can return "*" as a string in your set; that’s a wildcard. You probably don’t need to do that, though. In our case, we’re going to return the canonical class name, which is the fully-qualified class name of our Builder annotation, in this set. This is just telling the annotation processing tool that the only annotation we care about is @Builder.
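
As a small sketch (not from the talk), a processor that cares about several annotations would just add more names to that set. The javax.inject.Inject entry below is only an example of a predefined annotation, and the commented-out line shows the wildcard form:


@Override public Set<String> getSupportedAnnotationTypes() {
	Set<String> types = new LinkedHashSet<>();
	types.add(Builder.class.getCanonicalName()); // our own annotation
	types.add("javax.inject.Inject");            // an annotation from another dependency
	return types;
	// return Collections.singleton("*");        // wildcard: receive every annotation
}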

Lastly, we have the process method. This is where the magic happens. All of those other things will only get called once.


public class FooProcessor extends AbstractProcessor {

	

	@Override
	public boolean process(Set<? extends TypeElement> annotations,
							RoundEnvironment roundEnv) {

		...
	}
}

This will get called multiple times, which we’ll go over in a minute, but this is where you do the actual processing. Inside the RoundEnvironment, the annotation processing tool gives you all of the information you need about the classes in the class path that have been annotated with annotations that are important to you. We had a processing environment before, but now we have this RoundEnvironment. In order to understand what RoundEnvironment is, you need to understand how the annotation processing tool works, and what rounds are.

I’ve got some fancy little animations to help us understand that. Here we have Java source files on the left in these blue boxes. When you build your project and go through the compilation phase, the annotation processing tool is going to suck those in and process them. For some of them, during processing, it’s going to generate new source files. That’s those yellow-orange-gold looking things on the bottom. If we had the right color profile, they might show up yellow. That’s round one.

But now we’ve got all these new source files, and they haven’t been processed yet. So we go into round two of annotation processing. In round two we go through and process all of these new classes that didn’t exist before, and in our case we’re not generating anything new here, so those go into the completed state. That’s what processing rounds mean. What this means is that when your annotation processor generates code, it can generate annotated source code, and then that processor, or another one on your class path, can process those annotations again.
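
In code, the rounds show up through the process method being called repeatedly. Here’s a small sketch (not the talk’s processor) of how that looks; RoundEnvironment.processingOver() returns true on the final round, when no new sources can be generated:


@Override
public boolean process(Set<? extends TypeElement> annotations,
		RoundEnvironment roundEnv) {
	if (roundEnv.processingOver()) {
		// Final round: nothing new can be generated, so just finish up.
		return false;
	}
	for (Element el : roundEnv.getElementsAnnotatedWith(Builder.class)) {
		// Elements annotated in this round only. Any sources we generate now
		// come back through process() again in the next round.
	}
	return false;
}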

This turns into this whole big complicated thing, and basically what happens is the annotation processing tool is just going to go through rounds until there’s nothing left to process. The second to last thing we need to know about annotation processors is how they’re discovered.


META-INF/services/javax.annotation.processing.Processor

com.ryanharter.example.annotations.BuilderProcessor

The annotation processing tool uses the service loader, a standard Java API that allows us to have a plugin architecture. This is why we need that no-arg constructor on our processor. You have a file in your JAR, in META-INF/services, named after the fully qualified name of some interface that you implement. In our case, it’s Processor, the annotation processor interface.

Within that file, you put the fully-qualified name of each class that implements that interface and that you want to be discovered, one per line. In our case, we only have one. We’re going to call it BuilderProcessor, because we’re processing these builder classes. If we had more than one, we would put them all on separate lines. This is how the annotation processing tool discovers what annotation processors are available. What this means is that in order to use an annotation processor, all we have to do is have it on the class path.


dependencies {
	compile project(':annotation')

	apt project(':processor')
}

There’s no code you have to write. There’s no command line arguments you need to add. You just need to add it to your class path. But things get a little more complicated.

Normally when you add dependencies to your project, you’ll add compile dependencies. You add them to the compile configuration. We’ve all seen this. What that does is include that dependency and all of its dependencies in your binary. On Android, that would be your APK. In straight Java programs, that would be your JAR or the transitive dependencies of your library. We don’t want that. This is a compile time only dependency. We don’t need the processor at runtime.

We use the apt configuration, for the annotation processing tool, and what that means is that our processor and its dependencies are only included at compile time. If you’re considering using an annotation processor, something like AutoValue or Parceler, and you want to look at the method count so you don’t go over the DEX limit, as long as you put it in the right configuration, you don’t need to worry about that. In my processors for AutoValue, I use Guava, and everyone’s like, “Oh no, Guava!” But it’s only used at compile time, so who cares? It’s not included in your app. If you accidentally include it in the compile configuration, it will work just fine, but all of that unnecessary baggage will be packaged within your app.

This apt configuration is not included in the Android Gradle plugin or in Gradle by default; you have to use an extra plugin for that. If you’re doing an Android project, you would want to use Hugo Visser’s android-apt plugin. If you’re making a straight-up Java project, you want to use the Gradle APT plugin. That pretty much covers it for annotation processors. The next piece that we need to talk about is code generation.

Code Generation (18:37)

An annotation processor without code generation can do a lot of other useful things, but we’re going to talk about code generation. What is code generation? Code generation pairs really well with annotation processors, because this is a compile time thing that you can let run. We use it to generate new Java source files. I like to use JavaPoet from the guys at Square. It represents your Java sources as model objects, as POJOs.

As a little bit of an introduction, JavaPoet uses a fluent API with builders to help you build representations of your classes, methods, etc., and it’s based on specs. Think of a spec just like when you’re writing an app and your business department gives you a spec of what they need: it tells you how to build the app. Specs in JavaPoet work the same way. These are classes that define how to build elements in Java.

You have a TypeSpec. Remember what we learned before: a type, as far as Java elements are concerned, is a class, an interface, an enum, any of those high-level types. You have MethodSpecs. Everyone knows what a method is. You have ParameterSpecs and FieldSpecs. All of the major elements in Java have specs, so you can define how to build these elements. Let’s look at an example:


public final class UserBuilder {
	// fields
	private String username;
	

	// methods
	public UserBuilder username(String username) {
		this.username = username;
		return this;
	}
	
}

Here we have our UserBuilder. I’ve trimmed it down to its most basic components. What we have here is our type, which is a class. We have fields, in this case username and then a bunch of others. We have methods: we’re going to have the setters, and then we’re going to have our build method. There’s also going to be a constructor, which is a type of method.

When doing code generation with JavaPoet, I like to work from the inside out: fields first, then methods, and so on. Fields often get referenced in methods, so if you generate your fields first, you can reference them easily. In our case, we have a little hiccup: UserBuilder, the enclosing type, is referenced from a method as its return type. We can’t reference it until we have created some way to refer to it, and for that we use names.


public final class UserBuilder

String builderName = String.format("%sBuilder", type.getSimpleName());
ClassName builderType = ClassName.get(packageName, builderName);

JavaPoet has a concept of names, so type names and class names. They are a way to reference things. They could be preexisting classes in the Java APIs or the Android framework. Or they could be new things that haven’t been created yet. In our case, we need a way to reference our UserBuilder as the return type of a method. We’re going to go ahead and build that.

Since we don’t know ahead of time that the annotated class is called User, we build the name using String.format. Then we make a ClassName: we give it the package and the actual name of the object, and now we can reference that builder type anywhere. When we use names like this, JavaPoet takes care of keeping track of things like imports. We don’t write any import statements when using JavaPoet; it does that for us. That’s why we want to use these names instead of just hard coding them.
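
As a quick sketch (not from the talk), names can also point at classes that already exist, and JavaPoet emits the matching imports for us. The com.example package below is hypothetical:


ClassName listName = ClassName.get("java.util", "List");
ClassName userName = ClassName.get("com.example", "User");
TypeName listOfUsers = ParameterizedTypeName.get(listName, userName); // List<User>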

Now that we have that, let’s go back, and like I said, I like to start from the inside out. We’re going to start with our fields. In this case, we’re just looking at one field.


private String username;

This is pretty simple. It’s one line, but when you’re generating code, you need to think about things a little bit differently. You need to analyze what goes into a field. First of all we have modifiers, in our case, private. It could be private, static, final, etc. Those are all modifiers on the field. We have types. This one’s a String. Then we have the name itself. This one is called username.


private String username;


FieldSpec username = FieldSpec
	.builder(String.class, "username", Modifier.PRIVATE)
	.build();

JavaPoet gives us really good APIs to generate that based on all of those components once you break it down into its pieces. We’re going to use a FieldSpec. As you can see, all of those components are parameters on the method, on the builder for this. That’s how we tell JavaPoet how to build that field.


public final class UserBuilder {
	// fields
	private String username;
	

	// methods
	public UserBuilder username(String username) {
		this.username = username;
		return this;
	}
	
}

Going back to our class, now we have a method. Methods are a little more complex, but have a lot of the same components. We have modifiers. We have a return type, which is similar to a field’s type. We have the name. We have zero or more parameters. Then we have an arbitrary number of statements. In JavaPoet, as you can see below, this is a little more complex, but when we break it down the same way, we start with our modifiers.


MethodSpec usernameSetter = MethodSpec.methodBuilder("username")
	.addModifiers(PUBLIC)
	.returns(builderType)
	.addParameter(String.class, "username")
	.addStatement("this.$N = username", username)
	.addStatement("return this")
	.build();

We can add as many as we want of whatever type. We have the return type. We’re working across the line, the same way you would write this code. We have the name. We have our parameters, and because of this fluent builder API, you can add more parameters, as many as you want. That parameter is of type String, and it’s called username. Then we have statements.

You’ll notice that $N is a formatting argument, and it’s a bit more specialized than String.format in Java. What $N does is tell JavaPoet to use the name of that username field spec. This also takes care of imports: if we used a class name or a type name for some other object and referenced it here, JavaPoet is smart enough to know which types we’re using and makes sure they are imported. There are a bunch of other placeholder arguments, and we’ll see a few more of those coming up in a minute. So that’s our method.
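
For reference, here’s a small sketch (not from the talk) of the other common placeholders: $L embeds a literal, $S a quoted string, and $T a type, with its import handled for us.


MethodSpec greet = MethodSpec.methodBuilder("greet")
	.addModifiers(PUBLIC)
	.returns(String.class)
	.addStatement("$T message = $S", String.class, "Hello, JavaPoet")
	.addStatement("return message + $L", 42)
	.build();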


public final class UserBuilder {
	// fields
	private String username;
	

	// methods
	public UserBuilder username(String username) {
		this.username = username;
		return this;
	}
		
}

Now the last piece that we have to create is our actual type, the class. Again, we break it down. In this case, we have multiple modifiers, public final. Then we have a name and we have the content of this class. In JavaPoet, we use a TypeSpec.


TypeSpec builder = TypeSpec.classBuilder(builderType)
	.addModifiers(PUBLIC, FINAL)
	.addField(username)
	.addMethod(usernameSetter)
	.build();

We give it our modifiers, public final. We give it the name, and that’s that class name that we generated before. We’re going to add that username field that we created before, and we’re also going to add that username setter method. TypeSpecs don’t contain statements. They contain methods, fields, etc. We can add all of those to our type.


public final class UserBuilder {

}

TypeSpec builder = TypeSpec.classBuilder(builderType)
	.addModifiers(PUBLIC, FINAL)
	.addField(username)
	.addMethod(usernameSetter)
	.build();

Now we have a TypeSpec, and this TypeSpec contains our FieldSpec and our MethodSpec. JavaPoet knows how to create our class and print out the source code based on this TypeSpec. Now let’s put it all together.


@Override
public boolean process(Set<? extends TypeElement> annotations,
		RoundEnvironment roundEnv) {
	for (Element el : roundEnv.getElementsAnnotatedWith(Builder.class)) {

		// get element metadata

		// create private fields and public setters

		// create the build method
	
		// create the builder type

		// write the java source file
	}
}

We talked about that process method in our annotation processor, which is where the magic happens. The first thing that we need to do in there is get all of the elements that have been annotated with our Builder annotation. Remember, you can return as many or as few annotations as you want from the supported annotation types. It’s important that we don’t just grab all the elements; we get the ones annotated the way we want, because this processor could have another pass that processes some other annotation, for instance.

We’re going to get all of the elements out of that processing round. Then we need to do these five things on it and we’re going to go into these in a little more detail.


// get element metadata
String packageName = getPackageName(type);
String targetName = lowerCamelCase(type.getSimpleName().toString());
Set<VariableElement> vars = getNonPrivateVariables(type);

String builderName = String.format("%sBuilder", type.getSimpleName());
ClassName builderType = ClassName.get(packageName, builderName);

The first thing we do is get the element metadata. The element that we get out of this round environment has a lot of metadata: the class, the package, modifiers, all of that information about these classes that have been annotated. We’re going to get the pieces we need, like the packageName. I wrote that as a method; you have to get the fully-qualified name and find the last period, and it’s a pain. The targetName is what this class is going to be called as a variable, so we lower camel case it. Then the variables: in order to make a builder for a class, we need to know all of the fields in that class.
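
The talk doesn’t show those helpers, but minimal sketches of them might look something like this. For getPackageName I’m leaning on the Elements utility, which resolves the package for you instead of string-splitting the qualified name; the rest is hypothetical convenience code:


private String getPackageName(TypeElement type) {
	// The Elements utility can resolve the enclosing package directly.
	return processingEnv.getElementUtils()
			.getPackageOf(type).getQualifiedName().toString();
}

private String lowerCamelCase(String name) {
	return Character.toLowerCase(name.charAt(0)) + name.substring(1);
}

private Set<VariableElement> getNonPrivateVariables(TypeElement type) {
	Set<VariableElement> vars = new LinkedHashSet<>();
	for (VariableElement field : ElementFilter.fieldsIn(type.getEnclosedElements())) {
		if (!field.getModifiers().contains(Modifier.PRIVATE)) {
			vars.add(field);
		}
	}
	return vars;
}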

One of the things that I mentioned before is that annotation processors can generate new Java source files. One of the things they cannot do is modify existing Java source code. That’s an important distinction. There are some tools that do that kind of thing. Retrolambda does some magic on your existing code, not on your source files but on your compiled class files. There’s also Project Lombok, which works a little differently: it’s not a plain annotation processor like this; it hooks into the compiler and effectively modifies your existing classes as they compile. That can be really powerful, but it also makes things difficult to debug, among other things. We’re not going to be doing that.

That means that our generated code has to live by the same rules as all of our other code. The reason we’re getting the non-private member variables here is that our generated class isn’t going to be able to access the private ones. That’s a limitation. Some tools, like AutoValue, use things like abstract classes and static factory methods to get around this limitation.

We’re going to step through some of the metadata of that element and get all of the NonPrivateVariables, and then this is just building that class name that we saw before. The next thing we have to do is create those private fields and public setters.


// create private fields and public setters
List<FieldSpec> fields = new ArrayList<FieldSpec>(vars.size());
List<MethodSpec> setters = new ArrayList<MethodSpec>(vars.size());
for (VariableElement var : vars) {
	TypeName typeName = TypeName.get(var.asType());
	String name = var.getSimpleName().toString();

	// create the field
	fields.add(FieldSpec.builder(typeName, name, PRIVATE).build());

	// create the setter
	setters.add(MethodSpec.methodBuilder(name)
		.addModifiers(PUBLIC)
		.returns(builderType)
		.addParameter(typeName, name)
		.addStatement("this.$N = $N", name, name)
		.addStatement("return this")
		.build());
}

We did a specific example before, when we knew exactly what to expect. But in this case, this could be running over a hundred model classes, and we don’t know anything about those. So we’re going to collect field and method specs (you’ll see why later), and we’re going to step through all of those variables that we found and get some metadata about them. We get the TypeName of the variable, which includes the package and the class name. Then we get the SimpleName of that variable, which is username, firstName, lastName, and so on, and create our field.

This is the exact code we saw before. Only instead of hard coding values, we’re using them from that metadata that we gathered. We create a FieldSpec and then we create a MethodSpec. This again is exactly the same as we saw before, except we’re not using hard coded values. This is going to be a public method with the same name as the field. It has some statements in there to set the local property.


// create the build method
TypeName targetType = TypeName.get(type.asType());
MethodSpec.Builder buildMethodBuilder =
	MethodSpec.methodBuilder("build")
		.addModifiers(PUBLIC)
		.returns(targetType)
		.addStatement("$1T $2N = new $1T()", targetType, targetName);

for (FieldSpec field : fields) {
	buildMethodBuilder
		.addStatement("$1N.$2N = this.$2N", targetName, field);
}

buildMethodBuilder.addStatement("return $N", targetName);
MethodSpec buildMethod = buildMethodBuilder.build();

The fields and setters are the same code we saw before. The next thing we need to add, which we didn’t look at before, is the build method shown above. A builder needs a build method: it’s a public method called build that returns our targetType, in this case a User object. Then we need to add our initialization line.

Here’s another example: these are indexed placeholders, so $1T means the type of the first argument, $2N means the name of the second argument, and so on. We add that statement, which is going to end up being User user = new User(). Then we step through every one of those fields that we collected before and set the corresponding property on the new User object that we’re creating. Finally, our build method returns that User object. That’s it; that’s our build method. Next, we have to create the builder type.


// create the builder
TypeSpec builder = TypeSpec.classBuilder(builderType)
	.addModifiers(PUBLIC, FINAL)
	.addFields(fields)
	.addMethods(setters)
	.addMethod(buildMethod)
	.build();

Again, this is more of what we saw before: it’s a public final class with our builder type name. We add all the fields, all the setters, and then that build method that we created before. The last piece is writing this out to a source file.


// write java source file
JavaFile file = JavaFile
	.builder(builderType.packageName(), builder)
	.build();
try {
	file.writeTo(filer);
} catch (IOException e) {
	messager.printMessage(Diagnostic.Kind.ERROR,
		"Failed to write file for element", el);
}

JavaPoet has this JavaFile object where we give it a package name and a TypeSpec, and it knows from there how to write the file to the correct directory and all of that. All we do is tell it to write to that filer. Remember the filer we collected in the init step? Then you can see that if there are any errors, if our disk is full, if anything fails along the way, we use that messager. Instead of something like System.out, we use the messager to print an error message. This will fail the compilation with a nice, handy little error message telling the user exactly what went wrong.

To recap, this is our annotation processor. This is the whole thing:


@Override
public boolean process(Set<? extends TypeElement> annotations,
		RoundEnvironment roundEnv) {
	for (Element el : roundEnv.getElementsAnnotatedWith(Builder.class)) {

		// get element metadata
		// create private fields and public setters
		// create the build method
		// create the builder type
		// write the java source file
	}
}

Now that we’ve written this once, and written tests for it to make sure the output source code is exactly what we expect, how do we use it? Once we add this processor to the apt configuration in Gradle, we have our user object here:


public class User {
	String username;
	String firstName;
	String lastName;
	int age;
}

I’ve trimmed it down a bit, all we do is we add the builder annotation to it.


@Builder public class User {
	String username;
	String firstName;
	String lastName;
	int age;
}

Now you hit Command + F9, build your project, and we have this user builder, which gives us this really nice, easy API to build our user objects.


@Builder public class User {
	String username;
	String firstName;
	String lastName;
	int age;
}

User user = new UserBuilder()
	.username("rharter")
	.firstName("Ryan")
	.lastName("Harter")
	.age(30)
	.build();

All of the code within that user builder was generated for us. We don’t have to write tests for it. We can guarantee that it’s been generated the same every time. If we have a hundred objects that we need to generate builders for, we only had to write that code once.
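
The processor itself still needs those tests I mentioned, but only once. As a sketch (not from the talk), a test using Google’s compile-testing library might look something like this; the resource file names are hypothetical:


// static imports: com.google.common.truth.Truth.assertAbout,
//                 com.google.testing.compile.JavaSourceSubjectFactory.javaSource
public class BuilderProcessorTest {
	@Test public void generatesBuilderForAnnotatedClass() {
		assertAbout(javaSource())
			.that(JavaFileObjects.forResource("User.java"))        // annotated input
			.processedWith(new BuilderProcessor())                 // our processor
			.compilesWithoutError()
			.and()
			.generatesSources(JavaFileObjects.forResource("UserBuilder.java")); // expected output
	}
}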

Using our annotation / Conclusion (35:02)

That, in a nutshell, is annotation processing and code generation and how it can help you. There are a lot of good examples out there, like AutoValue for immutable value types and all of its extensions. There’s Parceler. There are some database layer tools that use annotation processors and generate code to deal with this, and other great tools like Butter Knife. And with that, that’s it for annotation processors.

Q & A (40:58)

Q: How does the IDE know about these generated classes?

Ryan: It’s a bit tricky, and it’s a pain. These generated classes end up in your build directory; if you peruse through it, there is a generated folder in there for all generated sources. When you configure your module in the IDE, you need to set that as a source set. In a lot of cases, that happens automatically. Android Studio is usually pretty good about picking up generated sources. Sometimes you’ll have to kind of futz through and set that up on the module yourself.

Q: If you’re generating multiple classes, can you reference something in generated code from another?

Ryan: We have our user. Maybe we’ll have a transaction object, which will also be generated but will reference that generated user code. That is something you can do. It gets a little tricky, depending on the hierarchy, but there’s nothing in the annotation processing pipeline that prevents you from doing it. The challenge is that when you get all of your annotated elements, you need to build up a graph to figure out what is going to reference what, and make sure you don’t mess it up by referencing something that you later decide isn’t going to be generated; then you’d be referencing dead code. So it gets a little tricky, but that’s all. There’s nothing in the framework that prevents you from doing that.

Q: Does the order of processing rounds matter?

Ryan: Round one is going to be all of your source files. Round two is going to be anything generated from round one that needs to be processed. You have a pool of things yet to be processed, and you empty it, you process stuff, and you might be putting more stuff in that pool. Once these are processed, they go in your completed pool. As long as that pool is not empty, then you have a second round. You don’t really have control over it, aside from if you’re generating annotated code. It will come back for round two.


About the content

This talk was delivered live in July 2016 at 360 AnDev. The video was recorded, produced, and transcribed by Realm, and is published here with the permission of the conference organizers.

Ryan Harter

Ryan is a multi-skilled engineer who loves making awesome apps for Android. He spends a lot of time in the Android source, and shares what he finds with everyone through blog posts, speaking engagements, and just helping out other developers and working on open source software. He’s built apps for lots of clients, and really enjoys graphics and OpenGL related work. When he’s not coding, you’ll find him traveling the world, skateboarding, biking, or playing music.

