Graffiti Art and AI
Graffiti is an urban art form that features bold lines, vibrant colors, and symbolic shapes. It is an important tool for exploring new ideas and expressing one's own voice in the world.
Researchers at Georgia Tech have developed a system called GTGraffiti that uses motion-captured movements to create graffiti art. The system imitates the fluidity of human movements and uses them to guide a robotic system to create its own tags.
Machine learning has become a new tool for digital artists. For example, image-generating software such as OpenAI’s DALL-E can create surreal images with just a brief text description. However, some artists are concerned that these systems may be using their copyrighted work without their consent.
One of the challenges of digitizing street art is imitating human movement: a robot can reproduce basic strokes, but nuanced gestures are much harder to replicate. GTGraffiti addresses this by learning the physical movements of human graffiti writers from motion-capture data and reproducing them on a robotic system.
To identify graffiti, the team used a deep learning architecture whose weights had been pre-trained for image classification on the ImageNet dataset. Using these features, the model classified graffiti versus non-graffiti images with an F1 score of 81%, and it successfully identified locations with illegal graffiti. The team also used data augmentation to enlarge the training dataset, which further improved performance.
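The F1 score reported above is the harmonic mean of precision and recall, so it penalizes a model that is strong on one but weak on the other. A minimal sketch of how such a figure is computed from raw counts (the counts below are made up for illustration, not the paper's actual confusion matrix):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)  # of everything flagged as graffiti, how much was right
    recall = tp / (tp + fn)     # of all real graffiti, how much was found
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only: 81 true positives, 19 false positives, 19 false negatives
print(round(f1_score(tp=81, fp=19, fn=19), 2))  # → 0.81
```

Because the two error types are weighted together, enlarging the dataset (as the team did with augmentation) can raise F1 by reducing either false positives or false negatives.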
Graffiti is a form of art that can be used to express individuality and creativity. Using AI in graffiti is a way to automate and improve the process of creating art, but it can also lead to a homogenization of styles and a loss of authenticity. It will be up to artists and the broader community to decide how best to use this technology.
Using a Python library, it was possible to create a model that can identify illegal graffiti. It uses a deep learning neural network to detect graffiti in an image and then returns the coordinates of the identified graffiti. This allows for a more accurate and efficient method of identifying graffiti, and it can support the current process carried out by Lisbon City Council's staff of identifying walls that require painting to remove illegal graffiti.
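The article does not give the model's exact interface, but detectors of this kind typically return bounding boxes with confidence scores. A hypothetical post-processing step (the `Detection` structure and the 0.5 threshold are illustrative assumptions, not the council's actual pipeline) might look like:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    # (x, y) of the top-left corner plus width/height, in pixels
    x: int
    y: int
    w: int
    h: int
    score: float  # classifier confidence in [0, 1]

def graffiti_coordinates(detections, threshold=0.5):
    """Keep only confident detections and return their pixel coordinates.

    The resulting boxes could then be handed to city staff to locate
    walls that need repainting.
    """
    return [(d.x, d.y, d.w, d.h) for d in detections if d.score >= threshold]

boxes = [Detection(10, 20, 100, 50, 0.92), Detection(300, 40, 60, 60, 0.31)]
print(graffiti_coordinates(boxes))  # only the high-confidence box survives
```

The threshold trades precision against recall: lowering it finds more graffiti but flags more clean walls.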
The model's classification head is composed of a Flatten layer, which converts the multidimensional output of the Keras application model into a one-dimensional tensor; a Dense layer, for which three different activation functions were tried; and a Dropout layer to prevent overfitting. Three hyperparameter configurations were tested to obtain optimal performance. The results show that the highest accuracy is achieved by a classifier with a dropout rate of 0.16, with a precision of 81% and a recall of 66%.
The research aims to find an image classification model for graffiti that can correctly distinguish illegal graffiti from street art. The model should also be able to return the coordinates of the area of the picture where the graffiti is located.
Several different models were tested under different conditions and parameters. The best results were obtained with a model that used the ResNet101 backbone and was pre-trained on the COCO dataset. The model was extended with additional layers that performed data augmentation, which increases diversity in the training images and reduces overfitting.
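Augmentation of the kind described can be as simple as flipping or rotating training images so the model sees more varied examples. A minimal sketch on a nested-list "image" (a real pipeline would use library augmentation layers rather than hand-rolled transforms):

```python
def horizontal_flip(image):
    """Mirror an image (rows of pixel values) left-to-right."""
    return [list(reversed(row)) for row in image]

def rotate_90(image):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

img = [[1, 2],
       [3, 4]]
print(horizontal_flip(img))  # [[2, 1], [4, 3]]
print(rotate_90(img))        # [[3, 1], [4, 2]]
```

Each transform yields a new, label-preserving training example for free, which is why augmentation effectively enlarges a small dataset.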
Another example of AI art is NVIDIA Research's GauGAN2. This model creates photorealistic images from a prompt, ranging from the shapes and colors of certain objects to specific visual styles like post-Impressionist painting. It has the potential to transform even the most mundane objects into arresting digital canvases, but many artists have criticized it as a form of plagiarism.
Graffiti art is a vibrant form of expression with detailed lines, bold colors, and symbolic shapes and figures. But it's also a challenging medium for a robot to master, as it requires fluid, nuanced movements that are difficult for a machine to replicate. Researchers at Georgia Tech have taken this challenge on with GTGraffiti, which is trained to imitate the physical movements of human graffiti artists using motion-capture data and a robotic cable-and-pulley system.
However, the authors note that a larger image dataset would be needed to improve model performance: the graffiti-ID model has only a relatively small set of images to train on, which could lead to a number of false positives and negatives. In addition, the system is not fully able to detect graffiti on walls that have been painted over, a problem the team will address in future work. On the generative side, artists can use a tool from Spawning to find out whether their works are among the roughly 5.8 billion images used to train popular image generators, and can opt in or out of that process.