
Open Source Hardware Educational Robotics


Atto Educacional toolkit

“The greater importance of free and open source hardware is that students can master the technology and gain autonomy: this way, they can effectively innovate, creating new products from open source solutions.” The phrase is credited to professor Douglas Sulis da Costa, a free and open source software enthusiast and one of the founders of Atto Educacional, a company focused on teaching robotics to children and high school adolescents. Douglas attended the 16th International Free Software Forum, recently held in Porto Alegre, Brazil, where he took part in a roundtable on the theme “Open Source Hardware Educational Robotics”.

Besides him, the meeting was attended by Claudio Olmedo, a well-known free hardware advocate in Brazil and co-founder of Centro Maker, the first company specialized in free hardware for makers; Enoque Alves, Master in Computer Science and a member of the “Jabuti Edu” project; and Germano Postal, Master in Mechanical Engineering, who moderated the debate. The main subject was, basically, “how can the Brazilian industry promote and make money with free and open source hardware development?”, a question raised by many and always controversial given the “proprietary nature” of computer systems developed in Brazil.

The good news for the free hardware and free software community and its enthusiasts is that this scenario has been changing in recent years, albeit at a slow pace. Professor Douglas, for example, after years applying Piaget's dynamics when teaching robotics in the classroom, stimulating children and adolescents to think about a problem and collaboratively reach a solution using basic concepts of engineering, electronics and computer programming, said that only in 2006 did he obtain the financial support to create Atto Educacional.

 

“Jabuti Edu” Project

Alongside the Atto Educacional proposal, the “Jabuti Edu” project introduces programming principles through play, using an educational robot built with a 3D printer. Developed collaboratively by the jabutiedu.org community, the robot works similarly to a radio-controlled car, and it is possible to program short sequences of movements and observe the results using only a browser on a mobile device. The central idea of the project is to transform abstract knowledge into concrete knowledge using the Logo programming language, which is aimed at children.

Jabuti Edu can be used by children as young as 4 years old, starting in basic education, then introducing the learning of electronics and robotics in pre-adolescence, between 10 and 12 years, followed by a deeper dive into free and open source hardware and software at 15 to 16 years. A curiosity: the name “Jabuti”, a tortoise species typical of Brazil, is an allusion to the “turtle” simulation presented in the Logo programming language environment.

 

For Makers by Makers

Imagine a company dedicated to helping people bring renewed vigor to robotics and to innovations in the free and open source hardware ecosystem (a movement known as “Makers” in technology communities). That is the challenge of a company called Centro Maker, which aims to popularize free and open source hardware, and therefore free and open source software too, helping makers become true social entrepreneurs in their professional fields.

Claudio Olmedo, one of the co-founders of Centro Maker, launched at Campus Party Brazil 2015 a project called “The Chestnut” to foster free and open source hardware development. The intention is for “The Chestnut” to cost only 1 dollar, so it can be accessible to anyone, anywhere in the world, with a focus on developing countries. The project is underway and should be presented during the 1st Latin American Free Hardware Forum in October, part of the 12th Latin American Free Software Conference (Latinoware).

 

References

Educational Robotics: https://en.wikipedia.org/wiki/Educational_robotics
Jabuti Edu: http://jabutiedu.org
Atto Educacional on YouTube: https://www.youtube.com/channel/UC81mJMwBYtbDydDtPo7e3Gg/videos
Jabuti Edu – Use Case: https://www.youtube.com/watch?v=4Vm1GMUr258
Centro Maker: http://www.centromaker.com/
Latinoware 2015: http://latinoware.org/


How to Build Up a Free Culture?

Children in Porto Alegre (southern Brazil) learning how to use free and open source software.

Educommunication, open source robotics, collaborative peer production and the use of open source technology based on the Linux operating system: these are the premises that guide professors Cristina Santos, Daniela Bortolon and Jacqueline Aguiar on a daily basis as they promote free culture in the education of children and teenagers in Porto Alegre, Brazil.

But first of all, what does “free culture” stand for? And how does it relate to the educational context? In a two-hour meeting at the International Free Software Forum (FISL 16), held in Porto Alegre between July 8th and 11th, the educators highlighted the main differences between the subject's past and present. “How do we form a critical person, capable of transforming the world and the culture we live in today?”, asked Cristina Santos in her talk.

Such is the importance of the free culture concept for developing a person who is thoughtful and active in his/her way of living. And the biggest challenge for implementing this way of thinking about education, they say, came first from the faculty itself, naturally averse to significant changes in their school environments, and also from navigating the fascinating world of discovering free technologies and the benefits they can bring to both students and instructors.

The starting point was the installation of the Linux Educacional 4.0 distribution on school computers, offering a series of educational programs and a friendly interface that facilitates its use, inviting educators and students to get involved in new possibilities and modalities of learning. The next step was the adoption of measures to deconstruct the negative memory associated with the use of Linux as a free operating system. One of them was the creation of the “Linux Sharing” group (“Linux Compartilhando”, in Brazilian Portuguese), formed in 2013; today it has over 100 active members. Through the group, it was possible to whet the curiosity of high school students to seek and share knowledge with classmates and with other classes in the schools they attend.

Another initiative was the creation of an e-learning course and a virtual forum based on the Edmodo environment. The introduction of open source programs for extracurricular work was also very important, such as the OpenShot video editor, the Audacity audio editor and the LibreOffice suite, which has a strong Brazilian Portuguese community, among many others. In 2015, the teachers' pioneering work continues to expand the network of knowledge to other cities in the countryside.

References

How to build up a free culture? (in Brazilian Portuguese): http://evidosol.textolivre.org/papers/2015/upload/84.pdf
Linux Educacional 4.0: http://linuxeducacional.c3sl.ufpr.br/LE4/
OpenShot: http://www.openshotvideo.com/
Audacity: http://audacityteam.org/
LibreOffice: https://pt-br.libreoffice.org/

Personal blogs:
https://formacaoemsoftwarelivre.wordpress.com/
https://midiasescolares.wordpress.com/
http://aprendendocomrobotica.blogspot.com.br/
http://estagiarios2015.blogspot.com.br/
http://websmed.portoalegre.rs.gov.br/escolas/revistavirtualagora/
http://mvinclusaodigital.blogspot.com.br/

 


Your Website as Fast as Google Using ElasticSearch

Many websites are a reference in their field due to the relevance of their content. But it is not enough to offer quality information, products or services; it is also necessary to make it easy for users to find what they are looking for. And why is that? Because when users do not find what they need, the easiest thing to do is give up and leave the website.

To avoid this, what should you do? One solution is to use MySQL's Full Text Search mechanism, which will apparently solve the problem for a while. But let's say the CEO of your online store decides to launch a big promotion on Black Friday (for example) and, to boost access and sell even more, runs a huge campaign using Google AdWords and Facebook Ads. Suddenly, the stream of people desperately searching for sales on the website increases tenfold. And that is where things get tricky for you, who decided to rely on Full Text Search.

And then comes the question: were you actually prepared to face this situation? To avoid unpleasant surprises, Breno Oliveira, a web developer certified as Sun Certified Java Programmer and Certified Scrum Master, shows how ElasticSearch can help you with this.

 

ElasticSearch

ElasticSearch is an open source search engine built on top of Apache Lucene, providing powerful full-text search. It offers a friendly RESTful API, real-time data, high availability (HA) and document orientation, among other features. Companies like GitHub, Twitter, Google, eBay, Foursquare, Bloomberg, The Guardian and Yelp already use ElasticSearch in production to search and aggregate data in real time.

 

Installing ElasticSearch

Installing ElasticSearch is actually quite simple. Just download the latest version, unzip it and run bin/elasticsearch (or bin/elasticsearch.bat on Windows), then test it with a $ curl -XGET http://localhost:9200/ request.
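As a minimal sketch of those steps on Linux (the archive name below is illustrative; use whatever version you downloaded from the ElasticSearch site):

# unpack the release archive and start a node
$ unzip elasticsearch-1.7.0.zip
$ cd elasticsearch-1.7.0
$ ./bin/elasticsearch
# in another terminal, check that it answers
$ curl -XGET http://localhost:9200/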

 

Some concepts before getting started

To make it easier to understand some concepts and terms, let's compare ElasticSearch with a relational database (MySQL):

[Table: ElasticSearch concepts compared with a relational database (MySQL); by the usual analogy, an index corresponds to a database, a type to a table, a document to a row, and a field to a column.]
Another important concept is understanding how it stores your documents. Anyone who has used a NoSQL database before, like MongoDB, will have no problem understanding it. ElasticSearch uses the JSON structure, supported by most programming languages.

 

Creating the first records

To follow the next steps more easily, you can use a REST client of your choice. In this example, we will use “curl” for simplicity.
Template to insert data into ElasticSearch:


$ curl -X PUT http://localhost:9200/produtos/bicicletas/1 -d '{
"modelo": "speed",
"nome": "Specialized Tarmac",
"marchas": 14,
"cor": "azul",
"tags": [
"bike",
"speed",
"specialized",
"tarmac",
"14 marchas"
],
"valor": 1500000
}'

What we did was this: we used “produtos” as the Index and “bicicletas” as the Type, and added one bicycle whose id is 1. If you do not provide an id for the document, ElasticSearch will generate one for you.

You will have the following response from ElasticSearch:


{
"_index": "produtos",
"_type": "bicicletas",
"_id": "1",
"_version": 1,
"created": true,
"_source": {
"modelo": "speed",
"nome": "Specialized Tarmac",
"marchas": 14,
"cor": "azul",
"tags": [
"bike",
"speed",
"specialized",
"tarmac",
"14 marchas"
],
"valor": 1500000
}
}

If you want to search the record above, simply run the following request:

 

$ curl -XGET http://localhost:9200/produtos/bicicletas/1
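A side note on ids: as mentioned earlier, if you POST to the type instead of PUTting to a specific id, ElasticSearch generates the id for you. A quick sketch, using a second, purely illustrative record:

$ curl -X POST http://localhost:9200/produtos/bicicletas/ -d '{
"modelo": "mountain",
"nome": "Bicicleta Ilustrativa",
"marchas": 27,
"cor": "vermelho",
"valor": 450000
}'

The response will include the generated "_id" and "created": true.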

 

There are two ways to run queries in ElasticSearch. The simplest is using query strings in the request URL; we usually use this format for simple and quick searches. The other way, for more complex queries, is sending a JSON body with the Query DSL. In general, this option is used for more refined results, aggregations and other ElasticSearch features.

Now, suppose you want to search for “bicicletas azuis” (blue bikes, in Portuguese) using a query string:

 

$ curl -XGET http://localhost:9200/produtos/bicicletas/_search?q=cor:azul

 

In the query above, we send the field “cor” with the desired color, in this case “azul” (blue). Thus, the query will search for all bicycles that are blue. ElasticSearch will return the following result:


{
"took": 3,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0.30685282,
"hits": [
{
"_index": "produtos",
"_type": "bicicletas",
"_id": "1",
"_score": 0.30685282,
"_source": {
"modelo": "speed",
"nome": "Specialized Tarmac",
"marchas": 14,
"cor": "azul",
"tags": [
"bike",
"speed",
"specialized",
"tarmac",
"14 marchas"
],
"valor": 1500000
}
}
]
}
}

 

To perform the same search using Query DSL:


$ curl -X GET http://localhost:9200/produtos/bicicletas/_search -d '{
"query": {
"match": {
"cor": "azul"
}
}
}'
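The Query DSL is also where aggregations live, one of the features mentioned earlier. As a rough sketch (the aggregation name "por_cor" is just illustrative), the request below counts how many bicycles exist for each color:

$ curl -X GET http://localhost:9200/produtos/bicicletas/_search -d '{
"size": 0,
"aggs": {
"por_cor": {
"terms": { "field": "cor" }
}
}
}'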

 

This was only an introduction to ElasticSearch, which also includes features such as geolocation and web analytics. Another interesting point is that ElasticSearch has several libraries for you to use in your favorite programming language. It is worth visiting the website and checking it out!

Source: iMasters


IT Trends and Opportunities in Brazilian Aerospace Sector

Jorge Vicente Lopes da Silva talks about the revolutionary role of 3D printing technology in solving complex problems

Demands, trends and IT opportunities in the supply chain of the aerospace industry were the issues discussed at an event held on June 2 at the Technological Park of São José dos Campos, São Paulo State, Brazil. Organized by Softex (Association for Promoting Brazilian Software Excellence), the event “Different markets generate great opportunities” was supported by the Technology Park, CECOMPI and Cluster TIC Vale, which brings together more than 60 companies established in the region, many of them from the aerospace sector. Among the speakers, Anderson Borille, from ITA, spoke about trends in manufacturing, especially those involving German Industry 4.0 and its current opportunities.

Then Jorge Vicente Lopes da Silva, Division Chief of Three-dimensional Technologies at CTI Renato Archer, spoke about the fundamental role of additive manufacturing in metal (3D printing) for the aerospace sector, and about the revolutionary role of 3D printing in solving logistical problems as well, with parts manufactured on the spot instead of being transported from one location to another, and in creating devices of complex geometry. Jorge also presented InVesalius, an open source software for the reconstruction of computed tomography and magnetic resonance images, available for Microsoft Windows, GNU/Linux and Apple Mac OS X.

The event also marked an important moment for Softex: its mentor, Eduardo Garcia, had died a week before. “The event ‘Different markets generate great opportunities’ was his idea and he was always very interested in connecting applicants and bidders. In his view, there are various segments with many unmet needs, and IT vendors able to meet them; therefore, we can play an important role connecting both sides”, said Virginia Duarte. His legacy, however, lives on in the organization.


References:

The Atlas of Economic Complexity: http://atlas.cid.harvard.edu/
Additive Manufacturing: http://additivemanufacturing.com/
German Industry 4.0 [in German]: http://www.plattform-i40.de/
InVesalius: http://www.cti.gov.br/invesalius/
Industry 4.0: http://en.wikipedia.org/wiki/Industry_4.0


11 Tips and Recommendations When Using APIs

Mauro Pichiliani develops different sorts of software and frequently attends hackathons and tech events in Brazil. During these activities, he usually needs to learn a new API to perform some task; in most cases, however, the API documentation lacks the information he needs to use it properly. Mauro notes that “good documentation alone does not make an API successful; other factors must be considered as well.” In this article, Mauro Pichiliani presents 11 tips and recommendations for those who want to get the best out of an API:

1. Samples all over the place

 


 

Many developers prefer to go straight to the code samples and spend little or no time studying and learning concepts. This is one of the reasons why it is important to always provide several samples of API usage, which means going beyond the basic “Hello, World”. A good tip is to provide at least one sample for each important API feature. It is also worth providing samples in each of the major languages/environments in which the API will be used and highlighting how it behaves in certain situations (bad or no Internet connection, usage limit exceeded etc.). It is also worth remembering that the more didactic, objective and realistic the samples are, and the closer they are to use cases popular in the market, the more useful they will be.

 

2. Have a test environment


 

A developer who is starting to use an API will run several tests before putting it into official (production) use. Because of this behavior, it makes sense for an API to offer a test or evaluation environment, where it is possible to “play” and find out what you are able to do with it. In some cases, especially for APIs that involve monetary values, it is worth investing in trial versions with some kind of usage limit by time or feature.

Some APIs require several steps before they can be used, such as registration, authorization and token validation. These important steps may discourage some developers from performing a quick test. For these cases, it is recommended to create a “playground”, a ready-made environment where the developer can get a taste of how the API works without having to go through various configuration steps and obtain authorization and authentication credentials.

3. Documentation beyond the basics

 


 

Documentation is a required feature for those who produce an API. There are several types of documentation, but what I search for when I'm learning is something that goes beyond a simple “JavaDoc” or something generated automatically. I look in particular for diagrams, architecture overviews and some kind of visual information that helps me understand the prerequisites, the workflow, the features and the necessary details so that I can do the job using the API. “Getting Started” guides are also great starting points for those who access the API website for the first time and have no idea of where and how to start using it.

Among the important items of good documentation is making clear where the API can and should be used and where it makes sense to use it. This is important so the developer knows how far it is possible to go with it, and also the scenarios and use cases where it is NOT recommended. It is also very useful to highlight the updated list of bugs for each version of the API, for there is nothing more frustrating than wasting time trying to track down a bug that has already been fixed in a newer version of the API.

4. A good FAQ and RTFM

 


 

I love FAQs (Frequently Asked Questions) when I am learning how to use an API. This format of quick questions and answers helps me a lot to understand the purpose and use of the API features. Not to mention that this way it is easy to find out what other people are trying to do with the API.

However, it is NOT worth including old or outdated information that no longer makes sense or refers to obsolete technologies. It is also important to adopt a lighter, humorous and didactic tone when writing the FAQ, compared to the official documentation, as friendlier content encourages developers to work with the API. On the other hand, avoid sarcasm, treating the developer as “stupid” or throwing the classic “read the manual” (the not-so-polite “RTFM”) at them. So try to be gentle.

 

5. Build up a communication channel


 

A successful API should have some sort of communication channel with developers. This channel may be an email address, Twitter, a contact form or anything else that allows a developer with a technical question to be heard. I emphasize that indicating the average response time of asynchronous channels (such as email) helps a lot to convey the feeling that the “client” is important and will be heard.

In general, communication channels for developers should be simple, short and quick, because whoever creates software and goes so far as to contact the API developer has probably already run several tests and hit a barrier that made them stop and call for help. Therefore, communication should be direct and to the point so that development is not delayed any further.

6. Make prices, access types and usage limits clear

 


 

Many APIs rely on a business model that charges for use, according to some list of features, time or number of calls. It is extremely important to make clear what the prices are, the types of usage (free, normal or advanced packages), the payment and billing options, and also how refunds work. It is common for APIs to start out free and, after reaching a certain number of users, become paid. At first, such a business model change can frustrate and scare some developers. In this case, what developers want is clear communication about how things will work once the service becomes paid, including periods, limitations, and the advantages and disadvantages of how the API service was and how it will be under the new business model.

7. Keep a decent history

Every software project that changes with each new version has a history behind it. That history is important not only to show the project's steps, but also to make clear which direction it is taking. Several APIs are constantly changing along with their environment, for instance: new browser versions, features to make them faster, changes to support new platforms and general adaptations to new technologies and web development trends. Therefore, presenting that history properly is important to provide context and make clear why the API is the way it is today.

 

8. Have a troubleshooting guide

 


 

It is very common to run into problems and difficulties when studying a new API or when certain environmental conditions change. At such times, it is important to have a troubleshooting guide that indicates, step by step, which points should be checked.

In the case of APIs, a troubleshooting guide should describe step by step how to perform a connection test, which parameters are required, what kind of response is returned for each request and what to do if any of the steps fail. This type of guide is very useful when debugging a problem and it is not known which component of the solution is failing. In this situation, a diagnostic guide with step-by-step instructions is extremely useful, helping first to understand what is going on and then to take some action to remedy the problem.

9. Roadmap and new features

 


 

One factor developers take into account when studying and pondering whether or not it is worth using an API is its continuity. In other words, they ask themselves: is it worth studying, investing time in and changing an application to use this API if it won't serve me in the future? No one wants to use a service that looks like an abandoned old western town; so a good way to make clear where the API project is going is to provide a roadmap, in a chart showing the latest API versions and the planning for the next ones. Thus, the API provider shows its commitment to maintaining the service in the future and makes it clear that the project is serious, not just something temporary with an uncertain future.

10. Highlight main projects using the API

 


 

An API is typically part of a larger piece of software, even if that software is just a graphical interface for its use. If the API was well planned and is something really useful and flexible, it is normal to expect different types of projects making use of it. Highlighting different projects that use the API is a great way to encourage other people to use it too and to add value to the service. When I am evaluating an API and I see several projects using it, I get a sense of confidence: if someone is already using the service in some way, it motivates me to use it as well.

11. Foster the community

 


 

No one wants to be the guest who arrives at the party first and realizes later that they were the only one who showed up. This means that fostering a community is a very important step, so that developers using the API do not feel isolated and lonely. There are several ways to engage with the community, including events like hackathons, coding dojos, meetings where people get together to work on ports to other programming languages or produce documentation, or even social events where community members go out for drinks and talk about the project. Regardless of the meeting format, it is worth investing in the user community, so that it becomes possible to improve the service, hear how it is used and stay in touch with those who rely on it.

Source: iMasters

 


MySQL Replication in 5 Minutes

Configuring MySQL replication is extremely simple. This article demonstrates how to create a master and replicate it to a slave in minutes. Replication is a native MySQL feature and has multiple uses, such as backup, high availability, redundancy, geographical data distribution and horizontal scalability, among others.

For this simple test, we use Linux and configure replication in MySQL 5.6 between two instances: a master and a slave. We will create both MySQL instances from scratch, that is, without data. They will be on the same machine, but responding on different ports: 3310 and 3311.

The only requirement is to have MySQL 5.6 installed.

» If you already have it installed, simply use the path where bin/mysqld resides as basedir in the steps below. For example, in Oracle Linux 7 or RHEL 7 the binary is located at /usr/sbin/mysqld, therefore basedir=/usr;

» If you do not have the MySQL 5.6 binaries, just download the tar file and unpack it into a convenient directory, which will be your basedir, such as /opt/mysql/mysql-5.6:

# mkdir /opt/mysql
# cd /opt/mysql
# wget http://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.23-linux-glibc2.5-x86_64.tar.gz
# tar xvzf mysql-5.6.23-linux-glibc2.5-x86_64.tar.gz
# rm mysql-5.6.23-linux-glibc2.5-x86_64.tar.gz
# mv mysql-5.6.23-linux-glibc2.5-x86_64 mysql-5.6

Note: In this case, consider basedir=/opt/mysql/mysql-5.6 in the steps below; always try to work with the latest versions, replacing 5.6.23 in the commands above if a newer one is available at http://dev.mysql.com/downloads/mysql.

 

Simple replication

Create an instance for the master:

# mkdir /opt/mysql/master /opt/mysql/master/data /opt/mysql/master/tmp
# cd /opt/mysql/master
# nano master.cnf
[client]
port=3310
socket=/opt/mysql/master/tmp/my-master.sock
[mysql]
prompt=master>\\_
[mysqld]
server-id=10
port=3310
basedir=/usr
datadir=/opt/mysql/master/data
socket=/opt/mysql/master/tmp/my-master.sock
log-bin=master-bin.log
innodb_flush_log_at_trx_commit=1
sync_binlog=1
# chown mysql:mysql *
# /usr/bin/mysql_install_db --defaults-file=/opt/mysql/master/master.cnf --user=mysql

Start and test the new instance:

# /usr/bin/mysqld_safe --defaults-file=/opt/mysql/master/master.cnf &
# mysql --defaults-file=/opt/mysql/master/master.cnf -uroot -p
master> SHOW VARIABLES LIKE 'port';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| port | 3310 |
+---------------+-------+

Note: To stop the MySQL process when necessary, make a clean shutdown:

# mysqladmin --defaults-file=/opt/mysql/master/master.cnf -uroot -p shutdown

Open another terminal and create another instance to be the Slave:

# mkdir /opt/mysql/slave /opt/mysql/slave/data /opt/mysql/slave/tmp
# cd /opt/mysql/slave
# nano slave.cnf
[client]
port=3311
socket=/opt/mysql/slave/tmp/my-slave.sock
[mysql]
prompt=slave>\\_
[mysqld]
server-id=11
port=3311
basedir=/usr
datadir=/opt/mysql/slave/data
socket=/opt/mysql/slave/tmp/my-slave.sock
log-bin=slave-bin.log
innodb_flush_log_at_trx_commit=1
sync_binlog=1
# chown mysql:mysql *
# /usr/bin/mysql_install_db --defaults-file=/opt/mysql/slave/slave.cnf --user=mysql

Start and test the new instance:

# /usr/bin/mysqld_safe --defaults-file=/opt/mysql/slave/slave.cnf &
# mysql --defaults-file=/opt/mysql/slave/slave.cnf -uroot -p
slave> SHOW VARIABLES LIKE 'port';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| port | 3311 |
+---------------+-------+

Now that we have two instances with different server-ids and log-bin enabled, create a user in the master instance so the slave can connect to it:

 

master> CREATE USER repl_user@127.0.0.1;
master> GRANT REPLICATION SLAVE ON *.* TO repl_user@127.0.0.1 IDENTIFIED BY 'repl_user_password';

 

Note: In an actual installation, the slave instance will probably be in another host — replace 127.0.0.1 with the host IP where your slave instance is.

Before starting replication, check the Master status:

 

master> SHOW MASTER STATUS \G
*************************** 1. row ***************************
File: master-bin.000003
Position: 433

Use the status data above to initiate replication on the slave:

slave> CHANGE MASTER TO
MASTER_HOST='127.0.0.1',
MASTER_PORT=3310,
MASTER_USER='repl_user',
MASTER_PASSWORD='repl_user_password',
MASTER_LOG_FILE='master-bin.000003',
MASTER_LOG_POS=433;
slave> START SLAVE;

Basic test:

master> CREATE DATABASE teste_repl;
master> CREATE TABLE teste_repl.simples (id INT NOT NULL PRIMARY KEY);
master> INSERT INTO teste_repl.simples VALUES (999),(1),(20),(5);
slave> SELECT * FROM teste_repl.simples;
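To confirm that replication is actually running, a quick check on the slave (a standard MySQL command, shown here only as a sketch of what to look for) is:

slave> SHOW SLAVE STATUS\G

Both Slave_IO_Running and Slave_SQL_Running should show “Yes”, and Seconds_Behind_Master should be zero or close to it.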

 

References:

http://www.clusterdb.com/mysql-cluster/get-mysql-replication-up-and-running-in-5-minutes
http://dev.mysql.com/doc/refman/5.6/en/binary-installation.html
http://dev.mysql.com/doc/refman/5.6/en/replication-howto.html


7 Tips to Optimize Up to 300% the Loading Speed in WordPress

Loading speed is an important aspect of a website, both for SEO and for user experience. For users, it is frustrating to wait a long time for a website to load, and usually when it takes more than three seconds, the user simply leaves. Simple as that. For Google, it is not interesting to deliver a website that will probably be abandoned by users. All the web design work and content creation will be in vain if users do not have the patience to see it.

To solve this problem, Eric Platas, web designer and SEO expert, highlights seven simple tips to optimize websites built on WordPress, although most of the tips apply to any type of website. Try them and you will be able to increase loading speed by up to 300%! Platas chose WordPress because, besides being the most popular website management platform in the world, it has the greatest support from the developer community, which is constantly producing new plugins, tutorials and updates. Depending on the platform you are optimizing, you may need to find an equivalent plugin or solution, but the concept behind each tip remains the same.

 

Google PageSpeed Insights

Let's start with Google PageSpeed Insights because it is the first step in optimizing a site and it will allow you to measure the importance of the next steps. Google PageSpeed Insights is a diagnostic tool that evaluates the performance of your website and suggests ways to improve its loading speed. It does NOT effectively measure the loading speed; rather, it checks aspects that are important for performance and suggests improvements, such as enabling caching, reducing the size of images or placing JavaScript at the end of the code. With PageSpeed, you will learn good development practices that help you write cleaner, better-packaged code, which will reflect not only on the performance of a website but will also facilitate its maintenance and have a positive impact on SEO strategies.

 

Pingdom Tools

Pingdom Tools, in turn, is a diagnostic tool that does the opposite of PageSpeed Insights. Pingdom Tools analyzes the HTTP requests made by your site, that is: requests for images, scripts, style sheets and external resources (social widgets, videos, iframes, Ajax etc.). The coolest thing is that Pingdom generates a report of all your website's files, showing when each file was requested, how long the server took to respond, the loading time and when the request finished. That way, you can identify performance bottlenecks, which you would not notice otherwise, such as heavy files, a slow server, external scripts (such as Facebook's) and broken links. Another cool feature in Pingdom is that it shows how long your website takes to load in different parts of the world, because depending on the distance from where your website is hosted, it may load more slowly.

 

Plugin WP Fastest Cache

When it comes to cache plugins for WordPress, W3 Total Cache is the biggest reference, being recommended by hosting companies like GoDaddy, HostGator and Rackspace. But after facing JavaScript compatibility issues on some websites, Platas decided to test other plugins. He then found a modest plugin, which had been rated only a few hundred times, but all with 5 stars. Intrigued, he decided to read the description and found out that the concept behind WP Fastest Cache is simple but very efficient. What it does is save a static HTML copy of the pages, eliminating the need for database queries and heavy processing on the server. It also has other features, such as gzip compression, browser caching and minification of HTML, JS and CSS.
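For sites outside WordPress, the same ideas can be applied by hand. The snippet below is a rough sketch of the kind of .htaccess rules a caching plugin generates on an Apache server; the modules are standard Apache, but the exact rules a given plugin writes will differ:

# gzip compression for text resources (requires mod_deflate)
<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
# browser caching via Expires headers (requires mod_expires)
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
ExpiresByType text/css "access plus 1 week"
ExpiresByType application/javascript "access plus 1 week"
</IfModule>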

 

Smush.it

Images account for more than half of a typical website's traffic, so they are one of the best opportunities for optimizing loading speed. Smush.it is a Yahoo! service that reduces the size of images without losing quality. Best of all, there is also a Smush.it plugin for WordPress that optimizes images as you upload them and also allows you to optimize all the images that have already been uploaded.

 

Install the essentials only

WordPress has plugins for just about every need of a website, and that's good. But some plugins are real performance villains, and even the lighter ones generate some extra processing. For this reason, it is important to pay attention to the installed plugins. A useful tip: disable the plugins that are not needed at the moment and uninstall those that have not been used for a long time. The biggest performance villains are the plugins that access external servers, such as the Disqus comment system or social sharing bars. These plugins need many scripts and style files to run, which makes the website slow and cumbersome, especially on 3G connections. Just use Pingdom Tools to identify the plugins that are delaying the loading of your site.

 

Be minimalist

Steve Jobs used to say something that I keep as a lesson: “I'm as proud of the things we don't do as of the things we do.” That's why the iPod, iPhone and iPad have a single button; they did not need more than one. Think about it when creating a website. Ask yourself: do I really need the image gallery? Does this layout work with standard fonts only? You will notice that your design decisions stand out more when you do less, and your websites will be much lighter.

 

CDN

As previously mentioned, the farther away your site is hosted, the slower it loads. To solve this problem, there are CDN (Content Delivery Network) services, which distribute content to servers around the world. Thus, when users access your site, they connect to the closest server, making access much faster. Some CDNs go even further, reducing file sizes and generating cache, such as Google PageSpeed Service and CloudFlare, both of which are free.

Source: iMasters

 


The Divorce Between PSD and HTML

Fabricio Teixeira, UX Director at R/GA New York, tells the following story. Some people are calling it a “death”, but I prefer to call it a “divorce”: they are still alive, healthy and strong, they just do not live together anymore. The PSD-to-HTML path, which for years was the most common one, sometimes the only one, in the web design process, appears to be in its last days.


First you design a page in Photoshop: an impeccable layout, representing exactly how you want the page to behave when accessed in the browser. Afterwards, a frontend developer turns that PSD file into HTML, CSS and JavaScript. The assets are cut one by one, exported from Photoshop and imported into the code. New tools and plugins keep being created to facilitate this process, and there are even companies, on the other side of the globe, that charge about US$ 100 to do it for you.

Nick Pettit, from the Treehouse blog, believes this is a process that makes sense at first glance. It can be difficult to start coding the webpage without knowing exactly what designers expect the end result to be, so trying Photoshop first and then exporting to HTML seems a reasonable process. And it ended up dictating much of the team structure within agencies and studios that create for the web.

But it turns out that the scenario has changed greatly in recent years. The direction in which web design is heading brings some aspects that make this PSD-to-HTML process start to look outdated.

Among the main changes:

 

Using CSS

With CSS3, many of the visual effects that were previously achieved only with Photoshop tools (shadows, rounded corners, gradients, among many others) became available in the CSS code itself. Previously, if a box had rounded corners in the layout, the programmer had to export the corners as images and make them fit, inch by inch, in the HTML. Most modern browsers already support these effects via CSS, and rare are the sites that still need to support older versions of Internet Explorer, the greatest “villain” of contemporary website design.
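As a simple illustration of the kind of effect that no longer requires exported images, the CSS below draws a box with rounded corners, a drop shadow and a gradient background (class name and values are just examples):

.card {
border-radius: 8px; /* rounded corners without sliced images */
box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3); /* drop shadow */
background: linear-gradient(#ffffff, #e0e0e0); /* vertical gradient */
}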

 

Responsive Design

The challenges of designing websites that run on all the resolutions available in the market are enormous, especially after the emergence of smartphones and tablets and the chaos caused by the lack of screen-size standardization among device manufacturers. Responsive design offers a very effective solution to this problem. Going back to the example of rounded corners, it is almost impossible to make image-based corners fit perfectly on every available screen resolution. And it is an illusion to think that just designing for the three or four most important breakpoints will do the job.
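The mechanism behind that solution is the media query: a single stylesheet adapts the layout to the viewport instead of relying on fixed-width layouts. A minimal sketch, with purely illustrative breakpoint values:

.container {
width: 100%;
}
@media (min-width: 768px) {
.container {
width: 750px; /* wider fixed column on tablets and up */
margin: 0 auto;
}
}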

 

Flat Design

Flat design trends and the decline of skeuomorphism have made websites that rely heavily on Photoshop-style visual effects less common. Interfaces without much shadow, bevel or emboss allow more and more websites to be built entirely with CSS, using images only for specific pictures and backgrounds.

 

Market maturity

Over the years, the web design industry has matured considerably. Designers and programmers have learned what works well and what to avoid. In some companies, the designer is expected to have a more accurate knowledge of what is possible with currently available technologies. Nowadays it even sounds crazy to propose solutions that are not prototyped and tested early in the project.

 

So… Photoshop is dead?

No, Photoshop is still very important for web design. What happens is that it tends to be used more as a “sketchbook” than as a formal step in the process. Designers test solutions in Photoshop to work out the harmony of the page and to present the visual identity to customers and other stakeholders. The layout also serves as a discussion tool so everyone can reach a consensus on the look the product should have. But it no longer makes sense to create several layout versions for eight different screen resolutions to hand over to the programmer. It is impossible and inaccurate.

 

Designing in the browser

Brad Frost is one of the developers who has already captured this shift in thinking. He says the best way to design a website in the browser is to have the programmer start coding on the same day the designer starts thinking about branding. Nothing like a 100% waterfall process, in which a small detail that turns out to be unfeasible to implement in the browser forces the project to take a step back.

Below is an interview in which he tells a little about the process he often applies in the projects he takes part in:

If you are interested in understanding how this shift in thinking affects the workflow within companies and agencies, I also recommend watching this video on Responsive Design Workflow.


And how does it affect the UX step?

Wireframes are falling into disuse. In practice, this is nothing more than a symptom of the change in workflow, in which exporting Photoshop assets to HTML proves unproductive for products designed for multiple screens.

Similarly, just as the agility required in projects prevents the visual designer from creating 20 versions of the same page for multiple screen resolutions, the UX designer also needs to optimize their time and avoid creating endless wireframes that interrupt the workflow. This new way of thinking about workflow often requires UXers to adapt their process to favor designing in the browser (or something as close to it as possible). Alternatives include: wireframes built collaboratively by the whole team, sketches, clickable mockups, UX designers creating prototypes together with the developer and, in some cases, even learning how to program.

As for the divorce mentioned at the beginning of this article, it is that kind of marriage that simply does not work anymore: both parties have changed a lot in recent years, each behaving differently, making it quite difficult to reconcile their interests.
Source: iMasters

 


Installing MongoDB on AWS


Rafael Novello, systems analyst, teaches how to install MongoDB on AWS. In his case, MongoDB is installed on EC2, the Amazon virtual server. In this environment it is possible to provision disk speed (on volumes called EBS) by configuring IOPS; for MongoDB, the faster the disk, the better! However, an important detail is that it is not enough to configure the disk with many IOPS; you also need to pre-warm it!

The pre-warming operation is necessary only at boot, whether it is a new disk or a disk created from an image. Without this preparation, the disk may be 5% to 50% slower (yes, up to 50% slower!), and this was certainly one of the factors that made me suffer with query performance. You can read more about pre-warming in the Amazon documentation, but it consists of unmounting the disk and running the following command:

sudo dd if=/dev/zero of=/dev/xvdf bs=1M

Replace xvdf with the device you are using and wait; the command can take a few hours to complete, depending on the disk size. A good idea is to use Linux screen so you do not lose the work if the session drops.
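Put together, a rough sketch of the whole pre-warming session might look like this (the device name is illustrative; adapt it to your instance):

# keep the session alive even if the SSH connection drops
screen
# make sure the volume is not mounted
sudo umount /dev/xvdf
# touch every block of the volume; this zeroes it, so it only makes sense for a brand-new, empty EBS volume
sudo dd if=/dev/zero of=/dev/xvdf bs=1M
# for a volume restored from a snapshot, a read-only pass preserves the data instead:
# sudo dd if=/dev/xvdf of=/dev/null bs=1M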

Another very important point for MongoDB performance, regardless of the host used, is the system's ulimit settings. Most Linux systems come configured by default to prevent a user or process from consuming too many server resources, but sometimes these limits are too low and interfere with MongoDB performance. You can read more about it in the 10gen documentation, but the recommendation is as follows (a sketch of one way to apply these limits appears right after the list):

file size: unlimited
cpu time: unlimited
virtual memory: unlimited
open files: 64000
memory size: unlimited
processes/threads: 64000
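One common way to make the open files and processes limits permanent is through /etc/security/limits.conf. The sketch below assumes MongoDB runs under a user named mongod; adjust the user name to your installation:

# /etc/security/limits.conf (illustrative entries)
mongod soft nofile 64000
mongod hard nofile 64000
mongod soft nproc 64000
mongod hard nproc 64000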

With these two settings I was able to dramatically improve the performance of my MongoDB installation, so I believe these tips can help everyone.

 

Hardware allocation

Another point I realized was very important for MongoDB performance is choosing the right server. When I started working with this database, I believed the disk would be the most important resource and, although that is not totally wrong, I eventually saw that RAM is what matters most. During database operation, if there is not enough RAM, the server constantly has to swap data in and out of memory, increasing disk usage and hurting performance. The recommendation is that at least the indexes fit in memory; you can check that with the stats command on each collection in the MongoDB console:

> db.sua_colecao.stats()

The command shows the size of each index in bytes; just add them up and see whether they fit in RAM. For those who use AWS, a new type of EC2 instance with memory optimization was released, the R3 instances. They are a great option for MongoDB, as shown in this mongodirector article.
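If adding the numbers by hand gets tedious, the shell can do it for you; the collection name below is just a placeholder:

// total size of all indexes of one collection, in bytes
> db.sua_colecao.totalIndexSize()
// or read the same value from the stats() document
> db.sua_colecao.stats().totalIndexSize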

 

MongoDB version

There are several reasons why it is always recommended to use the newest versions of any system, but in the case of MongoDB there are issues that matter directly to our discussion. For instance, version 2.4 had a global lock on the database server, which means that any write operation would block it completely and no other write could be made. In version 2.6 the locks moved to the database level, so it became possible to perform write and read operations simultaneously on different databases. In version 2.8, according to this MongoDB blog post, locks will be at the document level, which should bring a huge performance gain to the system.

Source: iMasters


Drawing Particles Using HTML5 Canvas

Canvas is one of the most fun features of HTML5. “The amount of cool things that can be created is absurdly huge. However, many people find it difficult to learn. But the truth is that it is not”, says Raphael Amorim, developer and open source enthusiast. “Of course, having a good geometry background is very important. But even if you do not know much, you can create very simple things and then go further.” He shows an example below:

In your HTML file, create a simple structure and add a canvas tag with a class of your choice; in this article, the class name will be “particles”. Before closing the body tag, call the JavaScript file, which is named “particles.js”.

<canvas class="particles"></canvas>
<script src="particles.js"></script>
</body>

 

Then, in particles.js, let's start the canvas magic! I'll explain the code in parts for better understanding; the full code is available on GitHub. First, attach a function to the window onload event, select the body and the canvas in it, and apply some styles to the elements. Note that there is no CSS file: I chose to set the styles within JavaScript, but you can do it the way you prefer. We also schedule the update function, which will run at a fixed interval.

window.onload = function() {
var body = document.querySelector('body');
body.style.background = '#2C2C44';
// the canvas has a class (not an id), so select it by class
canvas = document.querySelector('.particles'),
ctx = canvas.getContext('2d');
body.style.margin = '0px';
canvas.style.margin = '0px';
canvas.style.padding = '0px';
canvas.width = canvas_width;
canvas.height = canvas_height;
// run update() repeatedly, every `speed` milliseconds
draw = setInterval(update, speed);
}

 

After that we define some variables, such as the interval between events and the canvas size. Keep in mind that using global variables is not good practice; their use in this experiment is justified only for teaching purposes. This project does not use requestAnimationFrame, but I recommend taking a good look at it. With it, the browser can optimize simultaneous animations into a single reflow and repaint cycle, leading to higher animation fidelity; for example, it works very well for animations synchronized with CSS transitions or SVG SMIL.

In addition, for JavaScript-based animations whose loop runs in a tab that is not visible, the browser will not keep the loop running, which means less CPU, GPU and memory usage, leading to much longer battery life. This link provides a good starting point for studying requestAnimationFrame.
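As a rough illustration of the idea (not part of the original experiment, and the interval-based limit logic used below would need adapting), the same update loop could be driven by requestAnimationFrame instead of setInterval:

// sketch: let the browser schedule each frame
function loop() {
update(); // draw one batch of particles
requestAnimationFrame(loop); // request the next frame while the tab is visible
}
requestAnimationFrame(loop);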

// Settings
var speed = 35,
canvas_width = window.innerWidth,
canvas_height = window.innerHeight;

 

Then come the other global variables: a reference to the canvas and its context, the number of particles created so far, a limit of particles, the interval handle, the list of particles that were created and the colors used. Again, the use of global variables is not recommended; the computational cost is usually high and the application becomes less organized. One of the few cases in which global variables are advantageous is when the data is constant.

var canvas,
ctx,
times = 0,
limit = 100,
draw,
particles = [],
colors = ['#F0FD36', '#F49FF1', '#F53EAC', '#76FBFA'];

If we are creating something that needs randomness in the position, size and color of a particle, why not use a single function to deliver this data? This is not the best solution, but it is very practical and easy to understand.

var getRand = function(type) {
if (type === 'size')
return (Math.floor(Math.random() * 8) * 10)
if (type === 'color')
return Math.floor(Math.random() * colors.length)
if (type === 'pos')
return [
(Math.floor(Math.random() * 200) * 10),
(Math.floor(Math.random() * 80) * 10)
]
return false
};

Okay, now let's create a generic function that draws a particle based on the incoming arguments.

var drawParticle = function(x, y, size, color, opacity){
ctx.beginPath();
ctx.globalAlpha = opacity;
ctx.arc(x, y, size, 0, 2 * Math.PI);
ctx.fillStyle = color;
ctx.fill();
ctx.strokeStyle = color;
ctx.stroke();
}

 

Remember the update function, the one scheduled with setInterval inside the function attached to the window onload event? That is where the “magic drawing” of the particles happens, besides the control of the particle limit. Note that for each particle drawn, an entry with that particle's individual information is also saved in the particles list.

 

function update(args) {
var color = colors[getRand('color')],
pos = getRand('pos'),
size = getRand('size'),
opacity = 1;
drawParticle(pos[0], pos[1], size, color, opacity)
times++;
particles.push([pos[0], pos[1], color, opacity, size]);
if (times >= limit) {
clearInterval(draw);
draw = setInterval(clean, speed);
}
}

 

So far, the experiment only creates particles on the screen; when it reaches the limit, it stops.

There is a function named clean, which starts being executed when the particle limit is reached inside the update method. It goes through each particle and lowers its opacity a little on every run, over the interval defined above, producing a visual fadeOut effect for the particles.

 

function clean() {
ctx.clearRect(0, 0, canvas_width, canvas_height);
particles.forEach(function(p) {
/*
p[0] = x,
p[1] = y,
p[2] = color
p[3] = globalAlpha,
p[4] = size
*/
p[3] = p[3] - 0.06;
drawParticle(p[0], p[1], p[4], p[2], p[3])
if (p[p.length - 1] && p[3] <= 0.0) {
ctx.clearRect(0, 0, canvas_width, canvas_height);
clearInterval(draw);
times = 0;
particles = []
draw = setInterval(update, speed);
}
});
}

Now you can run the experiment in your browser; you will see a simple particle canvas (you can also view it running here). This code could use some refactoring, and if you want, you can send a Pull Request on GitHub.

Source: iMasters
