LightBlog

mardi 31 janvier 2017

TensorFlow 1.0 RC Released, Android Optimizations Among New Features

Feature Image Displays Picture A in the Style of Famous Paintings B, C, and D – Image Credit: Google Research Blog

TensorFlow – an open-source machine learning platform from the Google Brain team – has reached the release candidate stage for version 1.0, a milestone for the increasingly popular framework. Some of the most exciting new features include pre-made neural networks for Android cameras (person/object detection as well as artistic style transfer), a Java API, and Accelerated Linear Algebra (XLA) integration – a compiler that aims to lessen resource load and optimize applications for mobile use.

Tübingen Neckarfront, Germany by Andreas Praefcke in the style of "Head of a Clown" by Georges Rouault – Image Credit: Google Research Blog


Improvements in Python and Java

In this version, the Python API has been upgraded, adopting more of Python's own syntax and idioms. Unfortunately, this means that previous Python-based TensorFlow applications will need to be updated to continue functioning in 1.0. Although a conversion script has been released, some scripts may still need to be modified manually. The Python package is compatible with macOS, Linux, and Windows, and is available as a pip, Anaconda, or Docker install, among others.
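
For Python users, the move looks roughly like this (a minimal sketch; the file names are placeholders, and the conversion script's flags follow its documented usage):

  # Grab the 1.0 release candidate with pip (exact version pins vary by
  # platform and release -- check the release notes for the current wheel):
  pip install --upgrade tensorflow

  # Convert an existing 0.x script ("my_model.py" is a placeholder) with
  # the tf_upgrade.py conversion script shipped by the TensorFlow team:
  python tf_upgrade.py --infile my_model.py --outfile my_model_1.0.py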

An experimental Java API has also emerged, but for now it must be built from source code and requires a Linux or macOS environment.

XLA in Mobile and Beyond

The creators of TensorFlow have long been committed to a universal API, and the implementation of XLA can certainly help. XLA, in essence, compiles the operations that carry data from layer to layer of the neural network into machine code optimized for the CPU or GPU, resulting in reduced resource load, increased speed, less code, and an overall smaller, more lightweight application. This optimization will also make it easier to port server-run networks to mobile hardware. By providing the structure – and in some applications, the data – to create or use a neural network, TensorFlow's single API can be used across desktops, servers, and mobile devices; all each platform requires is its own backend. While offering great potential, XLA is still experimental, and the team behind it has asked explicitly for developer input so that they can quickly bring better machine learning performance to mobile platforms – they're looking at you, Snapdragon 835 fans.
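
Because XLA is experimental, it isn't enabled in the prebuilt binaries; trying it out currently means building TensorFlow from source and opting in during configuration. A rough sketch of that flow, following TensorFlow's standard Bazel-based build instructions:

  git clone https://github.com/tensorflow/tensorflow.git
  cd tensorflow
  ./configure    # answer "y" when asked about the experimental XLA JIT compiler
  bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
  bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
  pip install /tmp/tensorflow_pkg/tensorflow-*.whl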

Qualcomm, IBM, Raspberry Pi, Snapchat, and of course Google are a few names on the growing list of companies to either add support for or work closely with Tensorflow and its dedicated team. With this release, the team edges ever-closer to delivering freely implemented neural networks to application developers and consumers alike.


Interested in developing? Check out these links:

Source 1: TensorFlow Source 2: InfoWorld



from xda-developers http://ift.tt/2kNI0sh
via IFTTT

Benchmark Cheating Strikes Back: How OnePlus and Others Got Caught Red-Handed, and What They’ve Done About it

A few years ago, there was a considerable uproar when numerous major manufacturers were caught cheating on benchmarks. OEMs of all sizes (including Samsung, HTC, Sony, and LG) took part in this arms race of attempting to fool users without getting caught, but thankfully they eventually stopped their benchmark cheating after some frank discussions with industry experts and journalists.

Back in 2013, it was discovered that Samsung was artificially boosting its GPU clock speeds in certain applications, sparking a series of investigations into benchmark cheating across the whole range of manufacturers. At the time, the investigations found that almost every manufacturer except Google/Motorola was engaging in benchmark cheating. They were all investing time and money into attempts to eke a little bit of extra performance out of their phones in benchmarks, in ways that wouldn't have any positive effect on everyday usage, in an attempt to fool users into thinking that their phones were faster than they actually were. These development efforts ran the whole gamut, from setting clock speed floors, to forcing the clock speeds to their maximum settings, to even creating special higher power states and special clock speeds that were only available when benchmarking – with these efforts often resulting in just a couple of percentage points' increase in benchmark scores.
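
To illustrate the general mechanism (not any specific OEM's actual implementation), a clock speed floor can be as simple as a privileged process raising the CPU governor's minimum frequency through sysfs; the paths are the standard Linux cpufreq ones, but the values and cluster numbering vary by SoC:

  # Hypothetical illustration (requires root): pin a big-core cluster's
  # minimum frequency to 1.29 GHz (1286400 kHz) so it never idles back down:
  echo 1286400 > /sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq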

There was substantial outrage when it was discovered, as these attempts at benchmark cheating ran counter to the very point of the benchmarks themselves. Most benchmarks aren't there to tell you the theoretical maximum performance of a phone under lab conditions that aren't reproducible in day-to-day use; rather, they are there to give you a point of reference for real-world comparisons between phones. After a bit of public berating (and some private conversations) from technology publications, industry leaders, and the general public, most manufacturers got the message that benchmark cheating was simply not acceptable, and stopped as a result. Most of the few that didn't stop at that point stopped soon after, as many benchmarks made substantial changes to how they run in an attempt to discourage benchmark cheating (by reducing the benefit from it). Many benchmarks were made longer, so that the thermal throttling caused by maximizing clock speeds would become immediately apparent.

When we interviewed John Poole, the creator of Geekbench, the topic of benchmark cheating and what companies like Primate Labs can do to prevent it came up. Primate Labs in particular made Geekbench 4 quite a bit longer than Geekbench 3, in part to reduce the effects of benchmark cheating – reducing the benefits in order to ensure that the development costs of cheating aren't worth it.

"The problem is that once we have these large runtimes if you start gaming things by ramping up your clock speeds or disabling governors or something like that, you're going to start putting actual real danger in the phone. … If you're going to game it … you won't get as much out of it. You might still get a couple percent, but is it really worth it?" – John Poole


What Happened

Unfortunately, we must report that some OEMs have started cheating again, meaning we should be on the lookout once more. Thankfully, manufacturers have become increasingly responsive to issues like this, and with the right attention being drawn to it, this can be fixed quickly. It is a bit shocking to see manufacturers implementing benchmark cheating in light of how bad the backlash was last time it was attempted (with some benchmarks completely excluding cheating devices from their performance lists). With that backlash contrasting against how tiny the performance gains from benchmark cheating typically are (with most of the attempts resulting in less than a 5% score increase last time), we had truly hoped that this would all be behind us.

The timing of this attempt is especially inopportune, as a couple of months ago benchmark cheating stopped being a purely enthusiast concern and entered the public sphere, when Volkswagen and Fiat Chrysler were both caught cheating on their emissions tests. Both companies implemented software to detect when their diesel cars were being put through emissions testing, and had them switch into a low-emissions mode that saw their fuel economy drop, in an attempt to compete with gasoline cars in fuel efficiency while still staying within regulatory limits for emissions tests. So far the scandal has resulted in billions in fines, tens of billions in recall costs, and charges being laid — certainly not the kind of retribution OEMs would ever see for inflating their benchmark scores, which are purely for user comparisons and are not used for measuring any regulatory requirements.

While investigating how Qualcomm achieves faster app opening speeds on the then-new Qualcomm Snapdragon 821, we noticed something strange on the OnePlus 3T that we could not reproduce on the Xiaomi Mi Note 2 or the Google Pixel XL, among other Snapdragon 821 devices. Our editor-in-chief, Mario Serrafero, was using Qualcomm Trepn and the Snapdragon Performance Visualizer to monitor how Qualcomm "boosts" the CPU clock speed when opening apps, and noticed that certain apps on the OnePlus 3T were not falling back down to their normal idling speeds after opening. As a general rule of thumb, we avoid testing benchmarks with performance monitoring tools open whenever possible, due to the additional performance overhead that they bring (particularly on non-Snapdragon devices, where there are no official desktop tools); however, in this instance they helped us notice some strange behavior that we likely would have missed otherwise.

When entering certain benchmarking apps, the OnePlus 3T's cores would stay above 0.98 GHz for the little cores and 1.29 GHz for the big cores, even when the CPU load dropped to 0%. This is quite strange, as normally both sets of cores drop down to 0.31 GHz on the OnePlus 3T when there is no load. Upon first seeing this, we were worried that OnePlus' CPU scaling was simply set a bit strangely; however, upon further testing we came to the conclusion that OnePlus must be targeting specific applications. Our hypothesis was that OnePlus was targeting these benchmarks by name, and was entering an alternate CPU scaling mode to pump up their benchmark scores. One of our main concerns was that OnePlus was possibly setting looser thermal restrictions in this mode in order to avoid the problems they had with the OnePlus One, OnePlus X, and OnePlus 2, where the phones handled the additional cores coming online for the multi-core section of Geekbench poorly, occasionally throttling down substantially as a result (to the point where the OnePlus X sometimes scored lower in the multi-core section than in the single-core section). You can find heavy throttling in our OnePlus 2 review, where we found the device could shed up to 50% of its Geekbench 3 multi-core score. Later, when we began comparing throttling and thermals across devices, the OnePlus 2 became a textbook example of what OEMs should avoid.
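
You can run a crude version of our frequency check on your own device with nothing more than adb: watch the per-cluster clock speeds while a benchmark sits idle on screen. A quick sketch (the sysfs paths are the standard cpufreq ones; which CPU numbers map to the big and little clusters varies by SoC):

  # Print little-core (cpu0) and big-core (cpu2) frequencies once a second;
  # on an idle Snapdragon 821 both should fall back toward their minimums.
  while true; do
    adb shell cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq \
                  /sys/devices/system/cpu/cpu2/cpufreq/scaling_cur_freq
    sleep 1
  done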

We reached out to the team at Primate Labs (the creators of Geekbench), who were instrumental in exposing the first wave of benchmark cheating, and partnered with them for further testing. We brought a OnePlus 3T to Primate Labs' office in Toronto for some initial analysis, which included a ROM dump revealing that the OnePlus 3T was directly looking for quite a few apps by name. Most notably, the OnePlus 3T was looking for Geekbench, AnTuTu, Androbench, Quadrant, Vellamo, and GFXBench. As we had fairly clear evidence by this point that OnePlus was engaging in benchmark cheating, Primate Labs built a disguised "Bob's Mini Golf Putt" version of Geekbench 4 for us. Thanks to the substantial changes between Geekbench 3 and 4, the "Mini Golf" version had to be rebuilt from the ground up specifically for this testing. This version of Geekbench 4 is designed to avoid any benchmark detection, allowing Geekbench to run as a normal application on phones that are cheating (going beyond the package renaming that fools most attempts at benchmark cheating).


A Surprising Example

Immediately upon opening the app, the difference was clear. The OnePlus 3T was idling at 0.31 GHz, the way it does in most apps, rather than at 1.29 GHz for the big cores and 0.98 GHz for the little cores like it does in the regular Geekbench app. OnePlus was making its CPU governor more aggressive, resulting in a practical (and artificial) clock speed floor in Geekbench that wasn't there in the hidden Geekbench build. It wasn't based on the CPU workload, but rather on the app's package name, which the hidden build could fool. While the difference in individual runs was minimal, the relaxed thermal throttling shows clearly in our sustained performance test, shown below.

[Charts: sustained Geekbench performance, regular build vs. hidden "Mini Golf" build]

From our testing, it appears that this has been a "feature" of Hydrogen OS for quite a while now, and was not added to Oxygen OS until the community builds leading up to the Nougat release (after the two ROMs were merged). It is a bit disappointing to see, especially in light of the software problems that OnePlus has had this month following the merging of the ROMs, from bootloader vulnerabilities to GPL compliance issues. We are hopeful that as the dust settles down following the merger of the two teams, OnePlus will return to form, and continue to position themselves as a developer-friendly option.

With the "Mini Golf" version of Geekbench in hand, we went out and started testing other phones for benchmark cheating as well. Thankfully our testing shows no cheating by the companies which were involved in the scandal half a decade ago. HTC, Xiaomi, Huawei, Honor, Google, Sony, and others appear to have consistent scores between the regular Geekbench build and the "Mini Golf" build on our testing devices.

Unfortunately, we did find possible (but as yet unconfirmed) evidence of benchmark cheating from a couple of other companies, which we will be investigating further. The very worst example was the Exynos 8890-powered Meizu Pro 6 Plus, which took benchmark cheating to another extreme.


A Terrible Example

Meizu has historically set their CPU scaling extremely conservatively. Notably, they often set their phones up so that the big cores rarely come online, even in their "performance mode", making the flagship processors they put into their flagship phones (like the excellent Exynos 8890) act like midrange processors. This came to a head last year when AnandTech called Meizu out for the poor performance of the MediaTek Helio X25-based Meizu Pro 6 in AnandTech's JavaScript benchmarks, noting that the big cores stayed offline for most of the test (when the test should have been running nearly exclusively on the big cores). AnandTech noticed last week that a software update had been pushed to the Meizu Pro 6 that finally allowed the phone to use those cores to their fullest. AnandTech's senior smartphone editor, Matt Humrick, remarked that "After updating to Flyme OS 5.2.5.0G, the PRO 6 performs substantially better. The Kraken, WebXPRT 2015, and JetStream scores improve by about 2x-2.5x. Meizu apparently adjusted the load threshold value, allowing threads to migrate to the A72 cores more frequently for better performance."

Unfortunately, rather than improving the CPU scaling on their new devices to obtain better benchmark scores, Meizu appears to have simply set the phone to switch to the big cores when certain apps are running.

Upon opening a benchmarking app, our Meizu Pro 6 Plus recommends switching into "Performance Mode" (which alone is enough to confirm that it is looking for specific package names), and that mode seems to make a substantial difference. In the standard "Balance Mode", the phone consistently scores around 604 and 2220 on Geekbench's single-core and multi-core sections, but in "Performance Mode" it scores 1473 and 3906 – largely thanks to the big cores staying off for most of the test in "Balance Mode" and turning on in "Performance Mode". Meizu appears to lock the little cores to their maximum speed of 1.48 GHz and set a hard floor of 1.46 GHz for two of the big cores when running Geekbench in "Performance Mode" (with the other two big cores being allowed to scale freely, and quite aggressively), behavior which we do not see when running the "Mini Golf" build.

While being able to choose between a high power mode and a low power mode can be a nice feature, in this case it appears to be nothing more than a parlor trick. The Meizu Pro 6 Plus posts decent scores in "Performance Mode" with the regular Geekbench app, but when using the "Mini Golf" build of Geekbench, it drops right back down to the same level of performance it shows in "Balance Mode". The higher performance state on the Meizu Pro 6 Plus is just for benchmarking, not for actual day-to-day use.

One thing of note: when we tested the Meizu Pro 6 Plus in "Performance Mode" with the secret build of Geekbench, the big cores came online if we were recording the clock speeds with Qualcomm Trepn. We have not yet determined whether the Meizu is recognizing that Trepn is running and turning on the big cores partly because of it, or whether it is simply turning on the big cores because of the extra CPU load that Trepn creates. While it might sound counter-intuitive that an additional background load (such as when we kept performance graphs on during the test) would increase the results of a benchmark, Meizu's conservative scaling could mean that the extra overhead was enough to push it over the edge and call the big cores into action, thus improving performance for all tasks.


When receptive OEMs address feedback…

Following our testing, we reached out to OnePlus about the issues we found. In response, OnePlus swiftly promised to stop targeting benchmarking apps with this mechanism, though they still intend to keep it for games (which also get benchmarked). In a future build of OxygenOS, the mechanism will not be triggered by benchmarks. OnePlus has also been receptive to our suggestion to add a toggle, so that users know what is going on under the hood, and at the very least the unfair and misleading advantage in benchmarks should be corrected. Due to the Chinese New Year holiday and their feature backlog, though, it might be a while before we see user-facing customization options for this performance feature. While correcting the benchmark behavior alone is an improvement, targeting regular applications (like games) is still a bit disappointing to see, as it is a crutch: it targets specific apps instead of improving actual performance scaling. By artificially boosting the aggressiveness of the processor – and thus the clock speeds – for specific apps, instead of improving the phone's ability to identify when it actually needs higher clock speeds, OnePlus creates inconsistent performance, which will only become more apparent as the phone ages and more games that OnePlus hasn't targeted are released. However, the implementation does currently allow games to perform better. OnePlus also provided a statement for this article, which you can read below:

 'In order to give users a better user experience in resource intensive apps and games, especially graphically intensive ones, we implemented certain mechanisms in the community and Nougat builds to trigger the processor to run more aggressively. The trigger process for benchmarking apps will not be present in upcoming OxygenOS builds on the OnePlus 3 and OnePlus 3T.'

We are pleased to hear that OnePlus will be removing the benchmark cheating from their phones. Going forward, we will keep trying to pressure OEMs to be more consumer-friendly whenever possible, and will be keeping an eye out for future benchmark cheating.

Unfortunately, the only real answer to this type of deceit is constant vigilance. As the smartphone enthusiast community, we need to keep our eyes out for attempts to deceive users like this. It is not the benchmark scores themselves that we are interested in, but rather what the benchmarks say about the phone's performance. The benchmark cheating was not yet active on the OnePlus 3 when we reviewed it – a simple software update was enough to add this misleading "feature" – which clearly illustrates that checking devices for benchmark cheating when they first launch is not enough. Issues like this one can be added days, weeks, months, or even years after a device launches, artificially inflating the global averages gathered by benchmarks months down the line and influencing the final database results.

It should be noted that even with these tweaks, which manufacturers had to invest time and money to develop, we are typically only seeing a couple of percentage points' increase in benchmark scores (excluding a few fringe cases like Meizu, where the cheating is covering up much larger problems). That is much smaller than the gap between the best-performing and worst-performing devices. We'd argue, though, that with devices running increasingly similar hardware, those extra percentage points might be the deciding factor in the ranking charts that users ultimately look up. Better driver optimization and smarter CPU scaling can have an absolutely massive effect on device performance: the difference between the scores of the best- and worst-performing Qualcomm Snapdragon 820 devices (from major OEMs) exceeds 20% on Geekbench. Twenty percent from driver optimization, versus a couple of percentage points from spending time and money to deceive your users.

And that's just counting the development efforts that can affect benchmark scores. Many of the biggest benefits of investing in a device's software don't show up on benchmarks at all, with OnePlus offering excellent real-world performance on their devices. It really should be clear-cut where a company's development efforts are best focused. We are reaching out to more companies that cheat on benchmarks as we find them, and we hope they are every bit as receptive as OnePlus.


We would like to thank the team at Primate Labs once again for working with us to uncover this issue. It would have been substantially more difficult to properly test for benchmark cheating without the "Mini Golf" edition of Geekbench.



from xda-developers http://ift.tt/2jRGZRr
via IFTTT

Project Fi is Rumored to Integrate with Google Voice Soon

To kick off last week with a bang, Google finally started rolling out a big update to an application they had been ignoring for the longest time – Google Voice. This seems to be the latest step in Google's overarching goal of moving regular users away from Hangouts. With Hangouts shifting towards becoming an enterprise-first service, the millions of people who use it will need an alternative. Most can move over to applications like Allo and Duo, but this leaves many Project Fi users in the dark.

As of right now, Google doesn't pre-install Hangouts on the Pixel or the Pixel XL unless you buy either through Project Fi. Many Project Fi customers are currently using Hangouts for certain day to day features, such as synchronized messaging on multiple platforms. When Google completes the shift of Hangouts to the enterprise market, they will need a solution for current Project Fi users. A new rumor from 9to5Google claims that Project Fi may be integrated into Google Voice in the near future.

While Google has yet to confirm this integration, they do seem to hint at some sort of solution. They know that a lot of Project Fi customers rely on Hangouts, and have told 9to5Google that customers should "continue to use Hangouts" while Google is actively "working on a solution." They don't mention Google Voice as being that solution, but sources close to 9to5Google say that Project Fi integration will be a "keystone" feature of the new Google Voice update.

9to5Google trusts this source, as they have been correct with rumors in the past. Their source is the same person who previously told 9to5Google that VoIP integration would be coming to Google Voice in the future (which Google then confirmed). Still, as with most rumors, we should take this one with a grain of salt, but we could see things unfold this way since the information comes from what is said to be a reliable source.


Source: 9to5Google



from xda-developers http://ift.tt/2knmGg0
via IFTTT

ZTE Admits Kickstarter isn’t the Place to Sell Project CSX/Hawkeye

Late last year, ZTE decided they wanted to sell a smartphone whose ideas were crowdsourced from the community. The project was given the name Project CSX, and with it they wanted the community to decide everything that went into the phone.

The main rules when launching this project were that it needed to be a mobile product, it needed to be affordable, and it needed to be technically possible to launch in 2017. Naturally, this turned into a project for a smartphone, and the community was given 5 different features to vote on.

As fans of stock Android, many of us here at XDA wanted the stock Android option to win out when we wrote about it in October of last year. The other choices were eye-tracking and self-adhesion, a glove accessory that would be powered by Android, intelligent cases for phones (like a gamepad, stylus, or e-ink flip cover), and lastly a VR-interactive diving mask. Many of these were way out of left field and would obviously take years for a company to perfect before bringing them to market.

So the idea of a phone with eye-tracking and self-adhesion eventually won out (don't ask us how), and this is how we ended up with the phone that would soon be named Hawkeye. It was less than two weeks ago that they launched their Kickstarter campaign for the ZTE Hawkeye, and we learned what type of hardware they chose for the device. Many were upset that, after such a long buildup, they ended up choosing the Snapdragon 625 SoC for the device.

In a new interview with Android Central, Jeff Yee, ZTE North America's Vice President of Technology Partnerships and Planning, admits they should have approached this idea differently. Yee believes they should have been more granular with the idea – asking the community whether they wanted the Snapdragon 835 or the 625, then asking whether they wanted the fingerprint scanner on the front or the back. The Kickstarter campaign has raised less than $40,000 (out of the $500,000 they asked for), and Yee tells us they are considering cancelling the project altogether.

Mr. Yee says that if they do end up cancelling it, we could see the eye-tracking and self-adhesion features appear in a future ZTE flagship (please, please don't ruin the Axon line with this).

Source: Android Central



from xda-developers http://ift.tt/2kLPMTr
via IFTTT

Updated LineageOS 13.0 for the NVIDIA SHIELD Tablet

If you have an NVIDIA SHIELD Tablet, check out this LineageOS build, based on Android 6.0 with the latest LineageOS commits! Head on over to the forum post for the download link!



from xda-developers http://ift.tt/2kNJmX9
via IFTTT

Google Announces Android Nougat 7.1.2, Public Beta Starts Rolling Out For Pixel and Nexus Devices

In an official blog post, Google has announced the beta version of the upcoming Android maintenance release: Android Nougat 7.1.2. The public beta update will start rolling out today for supported devices. As always, the update will only be rolled out to devices enrolled in the Android Beta Program.

The supported devices include the Google Pixel, Pixel XL, Nexus 5X, Nexus Player, and the Pixel C. Unfortunately, the Nexus 6 and Nexus 9 won't be receiving Android Nougat 7.1.2, as confirmed by Google earlier. This shouldn't be a surprise at all, since both devices passed their 2-year software support period a while back. However, both will continue to receive monthly security patches for one more year.

Google says the update is focused on refinements, and it includes a lot of bugfixes as well as many under-the-hood optimizations. If you have previously enrolled your device in the program, you don't need to do anything at all; the update will automatically roll out to your device in the next few days. If not, enroll your eligible device in the Android Beta Program here. Alternatively, you can update your device manually by grabbing the factory images for your device from here.
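
For the manual route, the process is the standard one for Nexus and Pixel factory images: an unlocked bootloader, then the flash-all script included in the image. A sketch (the archive and directory names below are placeholders; note that flash-all wipes the device unless you edit the -w flag out of the script):

  adb reboot bootloader
  unzip sailfish-7.1.2-beta-factory.zip   # placeholder file name
  cd sailfish-7.1.2-beta/                 # placeholder directory name
  ./flash-all.sh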

The update changelog has not yet been disclosed by Google, though a release note posted on the Android Developers site for the Android 7.1.2 update outlines some of the known bugs in the beta. These include a Quick Settings issue on the Pixel C, occasional UI hangs, Wi-Fi stability issues, the screen turning black during the transition from the boot animation to the setup wizard, and more.

As for when we will see the final release, Google says they're expecting to release the final build of Android 7.1.2 in the "next couple of months."

Source: Android Developers Blog



from xda-developers http://ift.tt/2jqPpAF
via IFTTT

Nextbit has Officially been Acquired by Razer

You may know of Razer as the company that sells high-end PCs and PC accessories, but they got into the Android business in 2015 and even relaunched the OUYA store as Cortex for the Razer Forge TV. They haven't been very active in the community since then, but it seems they aren't done with the Android ecosystem either. Yesterday, it was announced that Razer had acquired both the assets and the entire 30-person team behind the Nextbit Robin.

So, many are asking what this means for Nextbit's flagship smartphone. Nextbit has confirmed they are no longer selling the Robin and its accessories through their official channels (although you can still buy it from Amazon as I write this up). Any remaining units being sold right now are part of the company's last batch of devices and there will not be any additional ones manufactured. Nextbit has also announced what this means for customers who currently own the Robin.

Nextbit CEO Tom Moss says they will continue to offer hardware support for the Nextbit Robin for 6 more months (for warranties and such). On top of that, current customers can expect to receive software updates (both new Android OS updates and security patches) for the next 12 months. After that though, Nextbit will no longer be working on the Robin. Instead, they are said to be working as an "independent division inside Razer," and will be focused on "unique mobile design and experiences."

While this is definitely bad news for those who wanted to see Nextbit succeed, it's certainly a better outcome than other technology companies faced when they had to sell their assets. We'll have to wait and see if the team continues to work on mobile hardware, or if they will be focused on integrating their cloud technology into current and future Razer products.

Source: Nextbit



from xda-developers http://ift.tt/2jRbqoJ
via IFTTT

lundi 30 janvier 2017

App Fixes the Quick Settings Flashlight Tile for the Redmi Note 3 Pro (Kenzo)

Some people with the Redmi Note 3 Pro have been having trouble with the Flashlight tile in their Quick Settings panel. XDA Recognized Themer Umang96 has recently released a root app that fixes this, with the help of XDA Junior Member shayanism and XDA Member hichaam.



from xda-developers http://ift.tt/2jMGpEu
via IFTTT

XDA-Developers Invites Your Ideas for the Google Summer of Code Program!

XDA-Developers was founded on the need to work with closed-source software, often in an effort to fix what the manufacturer broke or intentionally disabled on their smart devices. Since then, we have evolved to place more and more emphasis on open source projects. Open-source software is much easier for developers to work with, and is a great starting place for beginning developers to learn how to code.

With that in mind, we are proud to announce that XDA-Developers is applying to become a Mentor Organization for Google Summer of Code (GSoC).


What is Google Summer of Code?

Google Summer of Code is a global program focused on introducing student developers to open source software. Students apply and work on a 3-month open source project with a mentor organization during their summer break (based on the U.S. university schedule). Student participants (who must be at least 18 years old and have completed secondary school) are paired with a mentor from a participating organization, which allows them to gain exposure to real-world software development.

What's more, the incentive for students participating in this program is not just hands-on experience working on a project – you also get paid to contribute to open source projects!

You can learn more about Google Summer of Code over on its official page here. Additionally, you can view previous GSoC projects over here to get an idea of what kinds of projects other students have worked on in the past. Finally, you can also refer to the 2016 GSoC Archive and FLOSS Manuals.

XDA-Developers as a Mentor Organization

For the 2017 Google Summer of Code, XDA-Developers is applying to be a Mentor Organization for the very first time. We believe that open source code is the future of mobile software development. Our participation in the Google Summer of Code program is a small way of promoting the advantages of open source projects and instilling a love of open source development in the next generation of developers.

The first step in our application process is to invite project ideas from you as a community. These are ideas for projects that can be completed in about 12 weeks of coding. Anyone can submit a feasible idea, but we would really recommend taking a look at previous projects to get a good grasp on what the GSoC typically expects.

Here is the requirement set for the ideas:

  1. A project title/description
  2. More detailed description of the project (2-5 sentences)
  3. Expected outcomes
  4. Skills required/preferred
  5. Possible mentors at XDA (can be the idea submitter, but not a student)
  6. If possible, an easy, medium or hard rating for the project
  7. Your name and/or XDA username

All ideas are to be submitted over at our GitHub page.

The next step in our process is to build a team of Mentors. If you are able and willing, you can also apply to become a Mentor for your idea. The position requires a commitment to guiding a budding student and building one-on-one rapport, so any developers interested in fostering a love of open source projects may apply for this position.


So, does the idea of being a mentor or working with one strike your fancy? Make sure you apply in that case. Also, let us know your thoughts in the comments below!



from xda-developers http://ift.tt/2kkZcI2
via IFTTT

Mod Enables the Samsung Gear Application for Non-Samsung Devices

XDA Recognized Developer j to the 4n noticed that the Samsung Gear application didn't work on the HTC 10, as well as some other devices. So they whipped up a modded APK, and at least 4 others have confirmed that it fixed the issue.



from xda-developers http://ift.tt/2jOHKbw
via IFTTT

App for OnePlus 3 Gives you Better Control of Pocket Mode

XDA Senior Member rituj26 was tired of some OnePlus 3 ROMs only disabling gestures while others only disabled the fingerprint sensor. So they created this little root application that gives you the ability to toggle specific features when Pocket Mode is enabled.



from xda-developers http://ift.tt/2jM267R
via IFTTT

Opinion: The OnePlus 3 and Other 2016 Devices Stand to Benefit from a Held Up & Held Back Snapdragon 835

The recent wave of reports regarding the fate of Snapdragon 835 devices seems to point at a slight delay in the arrival of Qualcomm's latest and greatest. Furthermore, we now know a few big devices coming our way at MWC are launching with last year's Snapdragon 821.

There is nothing wrong with the Snapdragon 821 — in fact, it was a big offering from Qualcomm, a great step up from the flawed 810, and ultimately a good option for all kinds of OEMs, many of which accomplished great things in the realms of camera quality, performance, and even battery life with this chipset. We'd argue that more options for OEMs to choose from is always a good thing, though, and we are certainly concerned about Qualcomm once again holding a generation back, should the Snapdragon 835 arrive too late, on fewer devices, or perform worse than expected. These aren't unfounded concerns, and early figures of the processor's performance improvements don't suggest a year-on-year jump as prominent as what we are accustomed to, nor the kind of generational leaps we'd love to have. While 20 to 25 percent faster graphics and CPU performance is nothing to scoff at, the situation becomes a lot tougher for Qualcomm when you factor in the lead that A72-based processors and Apple chipsets already had in the CPU department, as well as the fact that A73-based chipsets and newer Mali GPUs are already being adopted by chipset makers like HiSilicon.

Suggested Reading: A Widening Gap: The A10 Fusion Puts a Chokehold on Qualcomm's Prospects

Furthermore, these performance improvements are notably lower than those of previous years. With Adreno GPUs, for example (and going by official percentages from Qualcomm), the Snapdragon 805's Adreno 420 was reported to be 40% faster than the preceding GPUs in the Snapdragon 800 and 801. The Adreno 430 in the Snapdragon 810 further boosted speed by 30%, making for a strongpoint of the 810 in spite of its thermal constraints. Finally, the Adreno 530 offers up to 40% better graphics performance over the 810's GPU. While these proportional increases don't always translate directly into benchmark results, Qualcomm has remained at the top of the mobile graphics game through its steadfast Adreno portfolio. This year, Qualcomm's GPU jumps just 25%, the smallest figure they themselves have shared over the years (however, I'd argue many of the mitigating factors mentioned below do make up for it). The advances in CPU performance follow a similar pattern, with the latest CPU increase settling for around 25% as well, despite the move to semi-custom cores (it's unclear whether the base is A72 or A73) and the 30% reduction in area enabled by the jump to 10nm, with area-efficiency greatly contributing to the claimed performance gains and power savings of 40%.

The fact that Qualcomm's Snapdragon 835 might come a bit later than usual, and that it might not be as big a leap as previous iterations have been, brings a bittersweet conclusion for current smartphone owners: their devices are slightly more future-proof, as the race for faster processors takes a short rest and picks up at a slower pace. Of course, other companies using non-Qualcomm chipsets will reap the benefit – and either catch up or get further ahead – but most OEMs are currently limited to Qualcomm chipsets for their flagship devices. In other words, current devices will have a couple of extra months before an iteration with a better processor arrives, and that increment won't be as drastic as in previous years.

The HTC U Ultra is selling with a Snapdragon 821 inside it, and there's reason to believe the LG G6 will as well — we know that devices coming at MWC are not arriving with a Snapdragon 835, or at least not going on sale with one until April (it's rumored that Sony's devices will indeed pack a Snapdragon 835, while being announced at MWC). I've confirmed with my sources that actual production will not start until after March for many of these Snapdragon 835 flagships, with Samsung being first in line. This has the awkward consequence of making companies like HTC and LG essentially launch their early 2016 and early 2017 flagships with practically the same chipset — in LG's case, the last 3 flagships in its two biggest lines will have near-identical computational ability. If you are an LG V20 owner and Android enthusiast, however, you have less of a reason to upgrade and thus little reason to fret! (While I wouldn't normally expect a V line owner to specifically go after a G line device, given the traditional differences between the two, with a larger screen on the G6 both lineups could be converging, much like the Galaxy Edge and Note devices ended up satisfying a very similar set of users.)

While the Snapdragon 821 fell behind in terms of raw CPU prowess, it kept a healthy lead in GPU performance through the sheer strength of the Adreno 530 — a department that Qualcomm has yet to surrender to other chipset makers in the Android space, even with ARM's Bifrost architecture in the excellent Mali G71. If we compare the transition from 2015 to 2016, we find that many users actually had a reason to actively go out of their way and upgrade, given the thermal constraints and efficiency limitations of the Snapdragon 810, which ultimately impacted every device it resided in (some less than others, such as the still-excellent Nexus 6P) with worse performance – particularly frustrating sustained performance – uncomfortable heat, and in some cases, disappointing battery life. There is much less of a reason to upgrade to a device running a 2017 Qualcomm chipset than there was for 2015 flagship owners in 2016, that's for sure. So if you bought a new phone in early or mid-2016 in particular, you get a sort of additional time window on your bleeding-edge status, especially if your choice was a Q1 or Q2 HTC or LG device. The phone I believe benefits the most from this, though, is the OnePlus 3 (and to a similar extent, the OnePlus 3T).


2017 Flagship Killer? No, but closer

OnePlus has made noise with its "Never Settle" slogan since its inception, and one could argue that the OnePlus One was fully deserving of such marketing — it did pack tremendous specs for its time, at a much cheaper price than the competition. Back then, affordable flagships were just starting to emerge and gain notoriety in the West. OnePlus managed to ride that wave and deliver a solid, affordable, and powerful package that many developers and XDA users still love to this day. It's surprising and telling how many OnePlus One users still roam our forums, how development lasted through multiple releases, and how well the phone holds up today. The OnePlus 2 was a different story, however — it was one of the worst examples of the Snapdragon 810, with inconsistent performance, throttling, and artificial workarounds that remind us of current practices. It laughably proved its own marketing slogan wrong, as the "2016 flagship killer" struggled to offer a better experience than 2014 phones.

The OnePlus 3 fixed that, and it not only offered a similar processing package to other phones of 2016 – it arguably beat most of them by not skimping on any component and intelligently using software for an extra advantage. The OnePlus 3 came out with the Snapdragon 820 and 6GB of LPDDR4 RAM, whereas every other flagship from well-known manufacturers still opted for 4GB of RAM. Sure, at the time of release there was no point in having that much RAM, but software updates did give OnePlus 3 owners better RAM management, and you can still get the most out of it down the road by modifying the software. It's a small thing, but certainly a specification that OnePlus can claim it had over 2016 flagships, and still has even over early 2017 flagships (at least the HTC U Ultra, which opted for 4GB of RAM).

Moreover, the phone uses a combination of UFS 2.0 storage with F2FS on the newer builds of OxygenOS, increasing read and write speeds and impacting real-world performance in the form of better app- and game-opening speeds. This is worth pointing out because not all 2016 flagships have this kind of fast storage, and few of those that do are set up with F2FS. We've detailed just how big of a difference this makes, and how it ties in with other decisions OnePlus made to deliver an extremely speedy phone with the OnePlus 3 and OnePlus 3T.

With the Snapdragon 835 in the situation it's in, the OnePlus 3 (and 3T) look even more attractive on paper – and ironically enough, even more worthy of the slogans OnePlus has used to market its previous phones. While we can all agree that the OnePlus 2's use of "2016 flagship killer" as an advertising catchphrase was ridiculous and completely unfounded, the OnePlus 3's processing package is made even more future-proof by the circumstances of the mobile silicon market. It topped benchmark charts at the time of release, and it demonstrably outperforms most other phones in real-world scenarios — and this is running OxygenOS as OnePlus intended, before considering all the options that XDA users are accustomed to: mods, custom kernels and ROMs, governor tweaks, and much more.

In this sense, the OnePlus 3 is extremely future-proof, and the non-T variant in particular stands out as a device that sold for not only half the price of 2016 flagships, but also of many 2017 flagships while still offering the same performance, or a delta that's smaller than years prior (once 835 flagships roll out). All 2016 phones stand to benefit from the current situation regarding smartphone processors, but the one that has the most going for it in terms of the best processing package for the longest time and for the least amount of money is, in my opinion, the original OnePlus 3.


Final Editor's Note: I personally believe that the Snapdragon 835 is a healthy upgrade over the Snapdragon 820 and 821, and that the quality of the chipset cannot and should not be measured merely by the performance improvements announced by the chipset maker or revealed by benchmarks. Qualcomm's chipsets in particular offer a ton of features that don't make it into charts and spreadsheets, from the Qualcomm-enabled TouchBoost and its app-opening speed tweaks to the many peripherals and useful functionality that come with the Hexagon DSP, their Aqstic codec, Quick Charge, and now support for TensorFlow for on-chip machine learning, VR optimizations, Q-Sync, and more. The 835 is also designed with power efficiency in mind, focusing on using the low-power cores for up to 80 percent of normal smartphone workloads. When it comes to raw performance and benchmark scores, though, I don't expect the 835 to blow anyone away, and I wouldn't be surprised given how little Qualcomm focused on performance in both the pre-briefing session and the launch event. We'll take an in-depth look at the Snapdragon 835 when we can get our hands on actual devices, putting them through our performance analysis, and we'll go beyond benchmarks to analyze and quantify its additional benefits as well.



from xda-developers http://ift.tt/2kki1ev
via IFTTT

The Chromecast Ethernet Adapter Works with Google Home

A couple of weeks after the Google Home personal assistant device was announced at Google I/O 2016, it was reported that Google Home would be nothing more than a Chromecast stuffed inside a speaker. The report came from The Information, which claimed this was the case because the two devices shared the same microprocessor and Wi-Fi chip. There really isn't that much to a Chromecast, so all Google would need to do is add a speaker, microphone, LED lights, and a plastic casing – and boom, you have Google Home.

Then, in November of last year, iFixit released their teardown of the Google Home, and it was confirmed that the two devices share similar hardware. We learned that Google Home shares the same CPU, flash, and RAM as 2015's Chromecast. The Chromecast has become incredibly popular, with Google selling tens of millions of units since it was first released, and Google even sells an Ethernet adapter for the Chromecast, available from the Google Store for $15.

Reddit user LeonJWood was having trouble connecting their Google Home unit to the wireless network available to them. It seems Google Home has difficulty connecting to 802.1X (WPA2 Enterprise) Wi-Fi networks unless MAC authentication is set up to automatically allow the device to connect. Naturally, this is not allowed in some work and school environments, so they were forced to find an alternative route. They were aware that the Chromecast and Google Home share similar hardware, so they purchased the Ethernet adapter from Google to see if it would work.

And indeed, it did work! All you have to do is connect the Ethernet adapter to Google Home via the port in the back (which is hidden by the speaker grill) and it will work. They do warn that anyone else on the network can see and control your Google Home, too. They have also noticed that streaming music to it from their smartphone will cause it to cut out from time to time – though this could be caused by other issues, so it might not be down to the Ethernet adapter itself.

Source: /r/GoogleHome



from xda-developers http://ift.tt/2jKzCLD
via IFTTT

Rumor Reveals the Alleged Camera Specs for the Upcoming BlackBerry Mercury

We first started hearing rumors about the smartphone from BlackBerry that carried the codename Mercury back in June of last year. At the time, all we had to go on was that BlackBerry was working on three new Android devices and they carried the codenames Neon, Argon, and Mercury.

Neon and Argon have both been released since then (we now know them as the BlackBerry DTEK50 and the BlackBerry DTEK60). Then, at the start of December, rumors with actual details of the Mercury smartphone began to leak. Such leaks suggested that it would come with the QWERTY-style keyboard that BlackBerry is so well known for.

Other than a few tidbits about the device being made available on the Verizon Wireless network, we hadn't really heard too much about this upcoming smartphone since then. That is, until CES 2017, when BlackBerry previewed the new QWERTY-keyboard smartphone we had been hearing about. Images of this smartphone showed up again in a Twitter post from the official BlackBerry Mobile account last week, too.

So it seems BlackBerry currently plans to launch this new smartphone next month at MWC 2017 in Barcelona. We know it will be manufactured by TCL (the same company that manufactured the DTEK50 and the DTEK60), but that's about all the official information we have right now. Interestingly enough, though, a couple of new rumors claim to reveal the camera sensors that BlackBerry and TCL will be using for the upcoming smartphone.

If true, the device will be equipped with either a Samsung S5K4H8 or Omnivision OV8856 camera sensor on the front. This is an 8MP sensor with 1.12μm pixels that can shoot in 1080p at 30 frames per second. The same source has also revealed that it will be using the same camera sensor the Google Pixel uses on the back of the phone (we did a comprehensive breakdown of why this sensor is special). This is a 12MP Sony IMX378 sensor that can shoot 4K video. We'll have to wait and see if BlackBerry's post processing can match or beat what Google has in their Pixel phones, but the rumor suggests they'll have the hardware to back it up.

Source 1: @rquandt Source 2: @rquandt



from xda-developers http://ift.tt/2jv4sVd
via IFTTT

Here Is How To Disable Dm-verity Warning On The OnePlus 3T

XDA Senior Member th3g1z has finally found a fix to disable the dm-verity warning on the OnePlus 3T running Android 7.0. The fix doesn't require flashing anything; you just need to execute two simple fastboot commands to get rid of the warning. Head over to the linked thread for more details.



from xda-developers http://ift.tt/2kKoAI5
via IFTTT

samedi 28 janvier 2017

Guide: Installing and Running a GNU/Linux Environment on Any Android Device

As many of you may well be aware, the Android operating system is powered by the Linux kernel underneath. Despite the fact that both Android and GNU/Linux are powered by the same kernel, the two operating systems are vastly different and run completely different types of programs.

Sometimes, however, the applications available on Android can feel a bit limited or underwhelming, especially when compared to their desktop counterparts. Fortunately, you can get a GNU/Linux environment up and running on any Android device, rooted or non-rooted. (The following instructions assume a non-rooted device.)

For those power users on Android tablets, or other Android devices that have large screens (or can plug into a bigger screen), the ability to run desktop Linux software can go a long way towards increasing the potential that an Android device has for productivity.


Setting Up GNU/Linux on Android

To get a GNU/Linux environment set up on your Android device, you only need to install two applications from the Google Play store: GNURoot Debian and XServer XSDL. After you do that, you will only need to run a small handful of Linux commands to complete the installation.

GNURoot Debian provides a Debian Linux environment that runs within the confines of the Android application sandbox. It accomplishes this by leveraging a piece of software called proot, a userspace re-implementation of Linux's chroot functionality, which is used to run a guest Linux environment inside of a host environment. Chroot normally requires root access to function, but by using proot you can achieve similar functionality without needing root privileges.
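
To give an idea of what that looks like, this is roughly the trick GNURoot performs under the hood: proot remaps file system accesses into an unprivileged rootfs directory and bind-mounts the host's special file systems into it (the rootfs path here is illustrative):

  # Enter a Debian rootfs unpacked at ./debian-rootfs, without root access:
  proot -r ./debian-rootfs -b /dev -b /proc -b /sys /bin/bash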

GNURoot comes with a built-in terminal emulator for accessing its Debian Linux environment. This is sufficient for running command-line software; however, running graphical software requires an X server to be available as well. The X Window System was designed with separate client and server components in order to provide more flexibility (a faster, more powerful UNIX mainframe could act as the client to X server instances running on much less powerful and less sophisticated terminals).

In this case, we will use a separate application, XServer XSDL, that GNURoot applications will connect to as clients. XServer XSDL is a complete X server implementation for Android powered by SDL that has many configurable options such as display resolution, font size, different types of mouse pointer behavior, and more.


Step-by-Step Guide

1. Install GNURoot Debian and XServer XSDL from the Play Store.

2. Run GNURoot Debian. The Debian Linux environment will unpack and initialize itself, which will take a few minutes. Eventually, you will be presented with a "root" shell. Don't get misled by this – this is actually a fake root account that is still running within the confines of the Android application sandbox.

3. Run apt-get update and apt-get upgrade to ensure you have the most up-to-date packages available on your system. Apt-get is Debian's package management system that you will use to install software into your Debian Linux environment.

4. Once you are up-to-date, it's time to install a graphical environment. I recommend installing LXDE as it is simple and light-weight. (Remember, you're running Debian with all the overhead of the Android operating system in the background, so it's best to conserve as many resources as you can.) You can either do apt-get install lxde to install the desktop environment along with a full set of tools, or apt-get install lxde-core to only install the desktop environment itself.

5. Now that we have LXDE installed, let's install a few more things to complete our Linux setup.

XTerm – this provides access to the terminal while in a graphical environment
Synaptic Package Manager – a graphical front-end to apt-get
Pulseaudio – provides drivers for playing back audio

Run apt-get install xterm synaptic pulseaudio to install these utilities.

6. Finally, let's get the graphical environment up and running. Start XServer XSDL and have it download the additional fonts. Eventually you will get to a blue screen with some white text – this means that the X server is running and waiting for a client to connect. Switch back to GNURoot and run the following two commands:

  export DISPLAY=:0 PULSE_SERVER=tcp:127.0.0.1:4712
  startlxde &

Then, switch to XServer XSDL and watch the LXDE desktop come up onto your screen.

I recommend putting the above two commands into a shell script so that you can easily restart LXDE if you close the session or if you need to restart your device.
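
For example, a script along these lines (the file name is just a suggestion), saved inside the GNURoot environment, will do the trick:

  #!/bin/sh
  # start-lxde.sh - point clients at XServer XSDL, then launch LXDE
  export DISPLAY=:0 PULSE_SERVER=tcp:127.0.0.1:4712
  startlxde &

Make it executable with chmod +x start-lxde.sh, then run it whenever XServer XSDL is already up and waiting for a connection.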


Installing Linux Applications

Congrats! You've successfully gotten Debian Linux up and running on your Android device, but what good is running Linux without apps? Fortunately, you've got a massive repository of Linux applications at your fingertips just waiting to be downloaded. We'll use the Synaptic Package Manager, which we installed earlier, to access this repository.

Click the "start" button in the lower-left corner, click Run, and then type synaptic. The Synaptic Package Manager will load. From here, simply press the Search button at the top and then type the name of the application you'd like to install. Once you've found an application, right click it and select "Mark for Installation". When you are finished marking packages, click the Apply button at the top to start the installation. Uninstalling packages follows the same procedure, except by right-clicking and selecting "Mark for Removal" instead.
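
If you prefer the command line, the same operations can be performed from an XTerm session using apt-get directly, for example:

  apt-get update                      # refresh the package lists
  apt-get install gimp libreoffice    # install packages by name
  apt-get remove gimp                 # uninstall a package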

Of course, since this isn't a real Linux installation, but rather a Linux environment running on top of (and within the constraints of) Android, there are a couple of limitations to be aware of. Some applications will refuse to run or will crash, usually because some resources that are normally exposed on GNU/Linux systems are kept hidden by Android. Also, if a regular Android app can't do something, then a Linux application running within Android usually can't either, so you won't be able to perform tasks such as partitioning hard drives. Lastly, games requiring hardware acceleration will not work. Most standard everyday apps, however, will run just fine – some examples include Firefox, LibreOffice, GIMP, Eclipse, and simple games like PySol.


I hope that you find this tutorial useful. While I personally performed these steps on my Google Pixel C, you can do this on most Android devices – preferably a tablet with access to keyboard and mouse peripherals, of course. If you already run a GNU/Linux distribution on your Android device, let us know what you are using it for below!



from xda-developers http://ift.tt/2jCgAqZ
via IFTTT

Rovo89: Update on Development of Xposed for Nougat

The reason why I personally continue to use Android 6.0 Marshmallow on my OnePlus 3, despite OnePlus pushing out the Nougat update for the phone to stable channels, is the presence of Xposed. The Xposed framework and its module ecosystem form a crucial part of the Android experience that I prefer — to the point where I am willing to forgo the latest OS update from the OEM just to savor this sweet fruit.

Xposed for Nougat is taking a while to come along, and while some of us do not mind waiting further, it has been a while since we last heard about the project's progress.

XDA Senior Recognized Developer rovo89 took some time to update us on the current state of the Xposed for Nougat project:

"It seems that more and more people get nervous about whether (and when) there will be Xposed for Nougat or not, so I felt I should say something.

Why does it take that long? Because with every release, I try to ensure that Xposed integrates nicely with the improvements in the new ART version. The step from Lollipop to Marshmallow wasn't huge. It was an evolution; some things even made it possible to integrate Xposed in a more elegant way. On the whole, it was mainly careful porting rather than innovating.

With Nougat, something fundamental has changed. If you're using Nougat already, you'll have noticed that installations are much faster now. That's because APKs aren't compiled immediately (AOT), but start in (slower) interpreting mode. Sounds bad, but they have enabled JIT, which will quickly compile those methods that are used very often. That will restore the well-known and constantly improving performance of native code. Besides that, ART keeps a list of these frequently used methods ("profiling"). When the device is idle, it finally does the AOT compilation, but based on the profiling data. After that, you get the great performance right after starting the app. JIT is still waiting in case the usage patterns change, and I think it will also adjust the profile and improve the AOT compilation.

That results in various different compilation states and more complexity. Besides that, there were many issues in the past caused by Xposed's need to recompile the whole ROM and all apps: It sometimes caused boot loops when the odex files were too heavily pre-optimized, it blocked quite some storage space to store the recompiled files, and I needed to disable some optimizations like inlining and direct pointer calls. I hope that I can make use of the JIT compiler to avoid that in Nougat. If Xposed knew from where a method is called, it could invalidate the callers' compiled code, so that they would temporarily use the interpreter. If they're important enough, JIT will recompile them.

I have already done a lot of research and experiments for this and I'm currently trying to implement this. But as you can imagine, all of that is much effort and can easily take hundreds of hours….." <continued in forum post>
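As a concrete illustration of the profile-guided flow rovo89 describes: on Nougat, the collected profile data is what drives the idle-time AOT step, and you can trigger that compilation manually over adb. A sketch, assuming a Nougat device with USB debugging enabled and using com.example.app as a placeholder package name:

  # Force profile-based AOT compilation of one app (package name is a placeholder)
  adb shell cmd package compile -m speed-profile -f com.example.app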

The main issue, as is usual with hobbyist projects, is the allocation of time, and we understand where rovo89 is coming from. Even as the Xposed project currently stands, it represents months of effort from various developers, all to make it something the end user can enjoy in such a simple and distributable manner.

As they say, Rome was not built in a day, but the bricks were laid every hour:

"So yes, I'm still working on Nougat support, whenever my free time allows it, but I don't have any idea when it will be done. Once it's done, you'll know."

rovo89

Android isn't perfect, and Xposed is what allows us to fix what the original developer won't. The wait for the ultimate Android fix continues on the newest OS, and we wish rovo89 the best of luck from our end.

You can read the full statement in the forum post. Are you waiting for Xposed too? Let us know in the comments!



from xda-developers http://ift.tt/2kyQF1C
via IFTTT

A Guide to Editing RAW Photography — Get the Most out of Your Smartphone’s Camera


After exploring the RAW capabilities of my OnePlus 3T and Sony NEX-5 cameras, an array of readers responded with questions and comments about RAW photography and their experiences. Many expressed a desire to learn how to edit their photography, and particularly how to deal with RAW file formats on both mobile devices and desktop operating systems; I was thrilled to see such a willingness to engage with something new like RAW photography. I was also deeply happy to have several readers relate that I had inspired them to explore photography once again, or even for the first time – it can come as a surprise to many that the device in their pockets is often their best tool for exploring. In light of this, my hope is that some assistance for those struggling to begin will continue to encourage anyone interested in photography, RAW or not, to persevere.

Remembering back to my first forays into photography and editing, I was lucky enough to ease into the prospect bit by bit, beginning with something as simple as the built-in editor in my HTC Incredible 2's gallery app. If I am remembering correctly, I stumbled upon Adobe Lightroom as an app for my iPad 3, which became my go-to editing device until I built my first desktop PC. Over the course of a month or so, I essentially explored each slider and option until I was relatively familiar with the program. I can easily recommend this to anyone with a lot of patience and curiosity, as you will inevitably find your own preferences along the way while also learning to use a powerful editing suite independently.

Nevertheless, having someone to guide you through the very first steps of editing and break down the menacing façade that Lightroom and other editors can present to the user is, of course, extremely useful. I will attempt to be that guide!


First Steps

As several curious and intrepid readers soon discovered, shooting in RAW is not necessarily the most intuitive experience, especially once one goes to find or edit the RAW files produced. Because RAW files, especially DNGs, are unprocessed sensor data rather than finished images, nearly all gallery apps simply will not register that they exist, on both mobile and desktop operating systems. This is not a criticism of gallery apps, but rather an unavoidable reality of RAW formats. As such, you will want to either install one of a handful of free RAW file managers, or bite the bullet and pay for something like Photo Mate R3 (~$8). Adobe Lightroom for mobile devices is likely your best option overall, being both free and well-designed.

For those of you looking for something a bit different, Photo Mate R3 is a fully-fledged mobile editor with almost all of the granular controls that Lightroom and other desktop editors offer. It also provides a gallery function with an array of sorting options, allowing the viewer to, say, selectively view only RAW format images and preview their thumbnails. The only major downside I noted is a lack of the granular noise-reduction controls of the sort that Lightroom offers. RAW files express all the noise the camera generates (a lot) and can appear rather off-putting if one does not first consider that lossy formats like JPEG receive often heavy-handed noise reduction as the RAW data is converted and compressed. RAW lets you decide how much noise reduction is needed, potentially preventing the overly soft images that smartphone cameras are often infamous for.

If you have access to a computer, there are numerous free options for editing RAW photography, such as GIMP and Rawtherapee. Rawtherapee in particular is a genuinely impressive program solely dedicated to editing RAW format images and is easy to recommend. There is also Google's free Nik editing suite, which includes a dedicated noise-reduction program to assist those on a budget who can't stand noise.

A brief glance at Rawtherapee 5.0's interface (Rawtherapee).
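On the command line, the long-standing free dcraw tool can also develop RAW files, which makes it easy to see just how little processing a RAW file carries by default. A sketch, assuming dcraw is installed and with photo.dng as a placeholder file name; -w applies the white balance the camera recorded, and -T writes a TIFF instead of a PPM:

  dcraw -w -T photo.dng    # writes photo.tiff with no noise reduction applied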

For those of you willing to fork over the cash, however, my one true photo-editing love has always been Adobe Lightroom. It may be an irrational attachment to the program I am simply most familiar with, but I find that it offers a wonderful, intuitive interface and an invaluable organizational layer that lets you comfortably back up a database of 40+ GB of edited photos while still retaining exact change histories and the original files. While that is next to nothing compared to professional photographers or very serious amateurs, I've taken and edited thousands of photos in the 5 years I've been active, and have a history of almost every single one in my Lightroom library.

A small snippet of my primary Lightroom catalog. My edited photos can be found at my Flickr and VSCO accounts.

While verifying that my understanding of Adobe Lightroom mobile was accurate, I discovered that free users can in fact edit RAW formats without a CC subscription! While the free version loses a number of features, it is still well-equipped and includes several noise-reduction filters, albeit without the ability to fine-tune them (aside from picking low, medium, or high reduction). Like Photo Mate R3, the Lightroom app offers a useful gallery feature that lets you preview RAW thumbnails and filter out non-RAW images. This app is definitely my recommendation for those looking for a slick, user-friendly solution. While experienced users may find more utility in Photo Mate R3's broader range of options, Lightroom will be more than enough for most mobile editors. This article provides a great overview of the app and its RAW editing features.


General Tips and Suggestions for Editing Photography

While providing granular tutorials for each of the applications mentioned above is a bit beyond the scope of this article, what I can do is explain some of the more common options you will have at your disposal, regardless of which one you choose to adopt. I will be using the desktop version of Adobe Lightroom (5.4) to demonstrate these features. After the process of finding your RAW files (usually .DNGs for mobile devices) and importing them into your app of choice, you will be presented with several options. Generally speaking, these options will be intended to modify the tone (exposure/lighting), white balance, and color in your photos.

Some of the most useful and intuitive editing methods are largely unique to Lightroom, and even then only to the desktop app. My favorite way to modify a photo's tone is through the histogram (the graph at the top of the screenshot below), which allows you to click on one of five sections (blacks, shadows, exposure, whites, highlights) and drag it left or right to reduce or increase the prevalence of that specific light type. The tone curve, found below the Basic section, can also be dragged about in a similar fashion, but is generally only needed for slightly modifying a nearly-complete image or recovering detail in an image that was drastically over- or underexposed. All of this can generally also be done with the sliders you can see on the right, but that takes somewhat longer and is also not nearly as fun! A great exploration of the utility of histograms and how to read them can be found here.

Two images and their related histograms.

Traveling down the options in the menu pictured below, we begin with 'WB', or white balance. This is used to improve the accuracy of color representation in photos by modifying the temperature and tint to direct the picture towards your preferred outcome, which may include fixing imperfect in-camera white balancing. In desktop and mobile Lightroom, you have the option of selecting the eye dropper, which effectively auto-corrects white balance once you point it at a spot in your photo that you know should be a neutral grey or white.

Tone settings come next, beginning with options for exposure and contrast. Exposure modifies the global brightness unselectively. Contrast further darkens darker areas of the image and brightens lighter areas. After these more heavy-handed options come more precise controls that can also be adjusted through the histogram on top, as I previously explained. The highlights slider will modify only the brightest sections of the image, allowing you to tame overexposed images (you may have seen or heard the term "blown highlights"). Shadows, on the other hand, can help recover lost detail in dark areas of images. Lastly, Whites and Blacks intuitively allow pixels leaning towards white or black to be made brighter or darker. Attentive readers may notice a theme so far: combinations of controls that offer large changes (whites, blacks) paired with controls that offer more detailed modifications to smaller parts of the image (highlights, shadows).

Continuing this trend, Clarity is effectively a method of adding contrast only to mid-tones (mid meaning the middle of the histogram). In doing so, the Clarity slider can give the benefit of added contrast while preventing the noise or grain (and often an uglier image) that can come from overuse of the global Contrast slider. This option is generally unique to Lightroom, but it can be partially replicated by experimenting with white and black levels (increased contrast would mean darker blacks and brighter whites). This method won't add edge detail like Clarity, but it will add contrast more subtly.

Saturation and Vibrance are the last basic settings one may frequently want to use. Saturation is the color equivalent of Exposure, allowing the user to globally deepen or lighten all colors in an image. Vibrance helps to avoid the downfall of global saturation changes by only adjusting the least (+) or most (-) saturated colors.

Finally, there are several more complex and granular settings that can be found in Lightroom and other desktop editing suites. Something I often find myself using is detailed saturation, hue, and luminance control (on the right), giving me the ability to, say, recover oversaturated blues or greens, or better express the yellows and oranges in a sunset photo with subpar white balance. The Detail section (on the left) is where noise reduction and sharpening settings can be found, very useful options to have when editing RAW files. Lightroom helpfully provides a small window with a highly magnified view, which makes it considerably easier to avoid introducing ugly artifacts or obscuring detail when modifying sharpness and adding noise reduction.



Practice, Practice, and More Practice!

As a tried-and-true trope of many a guide goes, my best suggestion for those just beginning to stretch their photo-editing legs is to not give up and to keep trying. Mistakes will be made and modifications will be overdone, but in time you will begin to develop a more instinctive understanding of editing and likely come into a style and workflow of your own. Mine has taken many years to develop, and I clearly remember struggling at first, as well as looking back at photos I'd edited years ago only to be aghast at the aesthetic decisions of past-me. I'm still learning more than 5 years in, and I even managed to learn a couple of new things about editing photos in the process of writing this. In all its breadth, photography is essentially an activity with constant opportunity for learning – and rather than being daunting, that simply makes it all the more exciting and rewarding.

Amidst the humbling response my previous article received, multiple readers shared some of their own impressive smartphone photography and blew me away. If you have taken any photos with your phone that you are proud of and would like to share, feel free to post them in the comments below this article, as well as on its corresponding Facebook posts or tweets. An upcoming article in this series will include a collection of user-submitted photography, so don't miss out!

Also ahead will be a brief tutorial on how to use the manual mode available on many modern smartphone cameras in order to best take advantage of their capabilities. 



from xda-developers http://ift.tt/2kyrDQr
via IFTTT

Moto G5 Passes Through the FCC, Likely to be Unveiled at MWC 2017

As we inch closer to one of the hottest events globally for smartphones, the Mobile World Congress 2017, more phones are being leaked along the way. This time, we get new information on the Moto G5, which has passed through the FCC.

The FCC filing does not reveal a whole lot of spec info on the Moto G5, but it does let us know that the device will be coming with a 3,000 mAh battery. The phone will also support a form of quick charging, likely called Turbo Charging based on past naming conventions. This is inferred from the adapter specifications listed: the included charging adapter is rated for 9V/1.6A and 12V/1.2A (14.4W in either case), as well as 5V/1.6A (8W). This is a nice change for the non-Plus variant, as only the Moto G4 Plus included the Turbo Charger in the box, while the Moto G4 came with a puny 5V/0.55A charging brick.

The other notable point in the FCC filing is the inclusion of NFC. Previously, the main Moto G4 lineup did not come with NFC capabilities, although the "Play" variants did sport it. Adding NFC to the base model suggests that all the other variants will have it as well.

Motorola does have an event planned for Mobile World Congress on 26th February 2017, where the Moto G5 and the Moto G5 Plus are likely to be unveiled. As for specs, the Moto G5 and G5 Plus are likely to stick with a 5.5″ display and switch to the Qualcomm Snapdragon 625 SoC. Leaked images of the G5 Plus have been floating around, but we will have to wait for more concrete information.

What are your thoughts on the Motorola Moto G5 and Moto G5 Plus so far? Let us know in the comments below!

Source: FCC Via: MotoG3.com



from xda-developers http://ift.tt/2kdRQpJ
via IFTTT

Friday, January 27, 2017

AutoVoice Integration Finally makes its way to Google Home, Here’s how to Use It

After a month in Google's approval limbo, AutoVoice has finally been approved for use as a third-party integration in Google Home. With AutoVoice integration, you can send commands to your phone that Tasker will be able to react to, allowing you to trigger countless automation scripts with nothing but your voice.

Previously, this required a convoluted workaround involving IFTTT sending commands to your device via Join, but now you can send natural language commands straight to your device. We at XDA have been awaiting this release, and now that it's here, we'll show you how to use it.


The True Power of Google Home has been Unlocked

The above video was made by the developer of AutoVoice, Joao Dias, prior to the approval of the AutoVoice integration. I am re-linking it here to demonstrate the possibilities of this integration, which is something we can all now enjoy since Google has finally rolled out AutoVoice support for everyone. As with any Tasker plug-in, there is a bit of a learning curve involved, so even though the integration has been available since last night, many people have been confused as to how to make it work. I've been playing with it since then and will show you how to make your own AutoVoice commands trigger through speaking with Google Home.

A request from Joao Dias, developer of AutoVoice: Please be aware that today is the first day that AutoVoice integration with Google Home is live for all users. As such, there may be some bugs that have yet to be stamped out. Rest assured that he is hard at work fixing anything he comes across before the AutoVoice/Home integration is released to the stable channel of AutoVoice in the Play Store.


Getting Started

There are a few things you need before you can take advantage of this new integration. The first, and most obvious, requirement is a Google Home device. If you don't have one yet, they are available in the Google Store among other retailers. Amazon Alexa support is pending approval as well, so if you have one of those you will have to wait before you can try out this integration.

Once you have each of these applications installed, it's time to get to work. The first thing you will need to do is enable the AutoVoice integration in the Google Home app. Open up the Google Home app and then tap on the Remote/TV icon in the top right-hand corner. This will open up the Devices page where it lists your currently connected cast-enabled devices (including your Google Home). Tap on the three-dot menu icon to open up the settings page for your Google Home. Under "Google Assistant settings" tap on "More." Finally, under the listed Google Home integration sections, tap on "Services" to bring up the list of available third-party services. Scroll down to find "AutoVoice" in the list, and in the about page for the integration you will find the link to enable the integration.

Once you have enabled this integration, you can now start talking to AutoVoice through your Google Home! Check if it is enabled by saying either "Ok Google, ask auto voice to say hello" or "Ok Google, let me speak to auto voice." If your Google Home responds with "sure, here's auto voice" and then enters the AutoVoice command prompt, the integration is working. Now we can set up AutoVoice to recognize our commands.


Setting up AutoVoice

For the sake of this tutorial, we will make a simple Tasker script to help you locate your phone. By saying any natural variation of "find my phone", you will have Tasker play a loud beeping noise so you can quickly discern where you left your device. Of course, you can easily make this more complex – perhaps locating your device via GPS and then sending yourself an e-mail with a photo from its camera attached – but the part we will focus on is simply teaching you how to get Tasker to recognize your Google Home voice commands. Using your voice, there are two ways you can issue commands to Tasker via Google Home.

The first is by speaking your command exactly as you set it up. That means there is absolutely no room for error in your command. If, for instance, you want to locate your device and you set up Tasker to recognize when you say "find my phone", then you must say exactly "find my phone" to your Google Home (without any other words spliced in or placed at the beginning or end), otherwise Tasker will fail to recognize the command. The only way around this is to come up with as many possible variations of the command as you can think of, such as "find my device", "locate my phone", and "locate my device", and hope that you remember to say at least one variant of the command you set up. In other words, this first method suffers from the exact same problem as setting up Tasker integration via IFTTT: it is wildly inflexible with your language.

The second, and my preferred method, is using Natural Language. Natural Language commands allow you to speak naturally to your device, and Tasker will still be able to recognize what you are saying. For instance, if I were to say something much longer like "Ok Google, can you ask auto voice to please locate my device as soon as possible" it will still recognize my command even though I threw in the superfluous "please" and "as soon as possible" into my spoken command. This is all possible thanks to the power of API.AI, which is what AutoVoice checks your voice command against to interpret what you meant to say and return with any variables you might have set up.
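If you are curious about what AutoVoice is doing behind the scenes, you can exercise an API.AI agent directly over its REST interface. A sketch, assuming you have already created an agent (see the setup section below) and substituting your own client access token for the YOUR_CLIENT_TOKEN placeholder:

  curl -H "Authorization: Bearer YOUR_CLIENT_TOKEN" \
    "https://api.api.ai/v1/query?v=20150910&lang=en&sessionId=demo-session&query=please+find+my+phone"

The JSON response contains the intent API.AI matched and any variables it extracted – essentially what AutoVoice hands off to Tasker.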

Sounds great! You are probably more interested in the second option, as I was. Unfortunately, Natural Language commands are taxing on Mr. Dias's servers, so you will be required to sign up for a $0.99 per month subscription in order to use them. It is a bit of a downer that this is required, but the fee is more than fair considering how little it costs and how much more powerful and useful it will make your Google Home.

Important: if you want to speak "natural language commands" to your Google Home device, then you will need to follow these next steps. Otherwise, skip to creating your commands below.


Setting up Natural Language Commands

Since AutoVoice relies on API.AI for its natural language processing, we will need to set up an API.AI account. Go to the website and click "sign up free" to make a free account. Once you are in your development console, create a new agent and name it AutoVoice. Make the agent private and click save to create the agent. After you save the agent, it will appear in the left sidebar under the main API.AI logo.

Once you have created your API.AI account, you will need to get your access tokens so that AutoVoice can connect to your account. Click on the gear icon next to your newly created agent to bring up the settings page for your AutoVoice agent.

Under "API keys" you will see your client access token and your developer access token. You will need to save both. On your device, open up AutoVoice beta. Click on "Natural Language" to open up the settings page and then click on "Setup Natural Language." Now enter the two tokens into the given text boxes.

Now AutoVoice will be able to send and receive commands from API.AI. However, this functionality is restricted until you subscribe to AutoVoice. Go back to the Natural Language settings page and click on "Commands." Right now, the command list should be empty save for a single command called "Default Fallback Intent." (Note in my screenshot, I have set up a few of my own already). At the bottom, you will notice a toggle called "Use for Google Assistant/Alexa." If you enable this toggle you will be prompted to subscribe to AutoVoice. Accept the subscription if you wish to use Natural Language commands.


Creating Tasker Profiles to react to Natural Language Commands

Open up Tasker and click on the "+" button in the bottom right hand corner to create a new profile. Click on "Event" to create a new Event Context. An Event Context is a trigger that is only fired once when the context is recognized – in this case, we will be creating an Event linked to an AutoVoice Natural Language Command. In the Event category, browse to Plugin –> AutoVoice –> Natural Language.

Click on the pencil icon to enter the configuration page and create an AutoVoice Natural Language Command. Click on "Create New Command" to build an AutoVoice Command. In the dialog box that appears, you will see a text field to input your command as well as another to enter the response you want Google Home to say. Type or speak the commands you want AutoVoice to recognize. While you are not required to list every possible variant of the command you want it to recognize, list at least a few just in case.


Pro-tip: you can create variables out of your input commands by long-pressing on one of the words. In the pop-up that shows up, you will see a "Create Variable" option alongside the usual Cut/Copy/Select/Paste options. If you select this, you will be able to pass this particular word as a variable to API.AI, which can be returned through API.AI. This can be useful for when you want Google Home to respond with variable responses.

For instance, if you build a command saying "play songs by $artist" then you can have the response return the name of the artist that is set in your variable. So you can say "play songs by Muse" or "play songs by Radiohead" under the same command, and your Google Home will respond with the same band/artist name you mentioned in your command. My tutorial below does not make use of this feature as it is reserved for more advanced use cases.


Once you are done building your command, click finished. You will see a dialog box pop up asking for what you want to name the natural language command. Name it something descriptive. By default it names the command after the first command you entered, which should be sufficient.

Next, it will ask you what action you want to set. This allows you to customize what command is sent to your device, and it will be stored in %avaction. For instance, if you set the action to be "findmydevice", the text "findmydevice" will be stored in the %avaction variable. This won't serve any purpose for our tutorial, but in later tutorials covering more advanced commands, we will make use of it.

Exit out of the command creation screen by clicking on the checkmark up top, as you are now finished building and saving your natural language command. Now, we will create the Task that will fire off when the Natural Language Command is recognized. When you go back to Tasker's main screen, you will see the "new task" creation popup. Click on "new task" to create a new task. Click on the "+" icon to add your first Action to this Task. Under Audio, click on "Media Volume." Set the Level to 15. Go back to the Task editing screen and you will see your first action in the list. Now create another Action but this time click on "Alert" and select "Beep." Set the Duration to 10,000ms and set the Amplitude to 100%.

If you did the above correctly, you should have the following two Actions in the Task list.
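In plain text, the finished Task looks like this:

  1. Media Volume – Level: 15
  2. Beep – Duration: 10000ms, Amplitude: 100%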

Exit out of the Task creation screen and you are done. Now you can test your creation! Simply say "Ok Google, ask auto voice to find my phone", or any natural variation that comes to mind, and your phone should start loudly beeping for 10 seconds. The only fixed part is the trigger that makes Google Home start AutoVoice – the "Ok Google, ask auto voice" or "Ok Google, let me speak to auto voice" part. Anything you say afterwards can be as free-flowing and natural as you like; the magic of API.AI makes it so that you can be flexible with your language!

Once you start creating lots of Natural Language Commands, it may be cumbersome to edit all of them from Tasker. Fortunately, you can edit them straight from the AutoVoice app. Open AutoVoice and click on "Natural Language" to bring up its settings. Under Commands, you should now see the Natural Language command we just made! If you click on it, you can edit nearly every single aspect of the command (and even set variables).


Creating Tasker Profiles to react to non-Natural Language Commands

In case you don't want to subscribe to AutoVoice, you can still create a similar command as above, but it will require you to list every possible combination of phrases you can think of to trigger the task. The biggest difference in this setup is that when you are creating the Event Context, you must select AutoVoice Recognized rather than AutoVoice Natural Language. You will build your command list and responses in a similar manner, but API.AI will not handle any part of parsing your spoken commands, so you must be 100% accurate in speaking one of these phrases. Of course, you will still have access to editing any of these commands, much like you could with Natural Language.

Otherwise, building the linked Task is the same as above. The only thing that differs is how the Task is triggered. With Natural Language, you can speak more freely. Without Natural Language, you have to be very careful how you word your command.


Conclusion

I hope you now understand how to integrate AutoVoice with Google Home. For any Tasker newbies out there, getting around the Tasker learning curve may still pose a problem. But if you have any experience with Tasker, this tutorial should serve as a nice starting point to get you to create your own Google Home commands. Alternatively, you can view Mr. Dias' tutorial in video form here.

In my limited time with the Google Home, I have come up with about a dozen fairly useful creations. In future articles, I will show you how to make some pretty cool Google Home commands such as turning on/off your PS4 by voice, reading all of your notifications, reading your last text message, and more. I won't spoil what I have in store, but I hope that this tutorial excites you for what will be coming!



from xda-developers http://ift.tt/2kCU2rs
via IFTTT