I was just wondering, why do we try to optimize stock .apks? If HTC or T-Mobile released them, I doubt they would purposely bloat them up or do anything to make them even the slightest bit slower. What if our "optimizing" actually has zero effect, or even adverse side effects? My point is that if they could be further optimized, HTC would have done it already. (This is just out of curiosity.)
Well, to be brutally honest, I highly doubt T-Mo, HTC, etc. actually go through their newly built apks looking for any and all optimizations right off the bat; they're most likely far too busy, and odds are they couldn't care less, as long as everything fits in the first place.
As for adverse effects, I for one haven't noticed anything slow or unacceptable about any apps I've ever used.
And most of the optimizations are done so that the apk will fit in the actual ROM release (guessing here, I'm not really a 'developer' ;P ), given that the ROM contains many apps, etc.
Although with most of the apps2sd stuff going on now I don't see the point, but it's still valid.
(Look at it from a user's eyes: if a file is 20 MB and takes 20 seconds to download, or 10 MB and takes 10 seconds, for exactly the same thing, which would you pick?)
Hope I cleared up some stuff, or at least brought up some more/different questions xD
The real reason? So "devs" can type "ZOMG! Optimized all apks!!!" in their ROM descriptions. The supposed "optimization" boils down to two steps: running optipng or roptipng, a tool that does some aggressive, yet still lossless, compression passes on PNG images and reduces their size by a certain amount (depending on previous compression, up to ~60%), and then running zipalign, a tool from the Android SDK that aligns uncompressed data inside the zip/apk on 4-byte boundaries so it can be memory-mapped, which makes loading and execution a tiny bit faster.
coolbho3000 made a Windows bat script that does all that automatically, plus a third step: compressing the apk/zip to anorexic levels with an aggressive -mx5 zip compression. This usually cuts down the size of the /system/app folder by a reported 10%.
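For anyone curious what those steps actually look like in practice, here's a rough shell sketch of the same idea (just an illustration - it assumes optipng, 7za, and the SDK's zipalign are on your PATH, the apk name is made up, and a non-system apk would also need re-signing afterwards since the PNGs change):

Code:
# apks are just zips, so unpack one into a working dir
mkdir work && cd work
unzip -o ../MyApp.apk

# 1. losslessly recompress every png (this is all optipng/roptipng does)
find . -name '*.png' -exec optipng -o7 {} \;

# 2. repack with aggressive, but still standard, deflate compression
7za a -tzip -mx5 ../MyApp_repacked.apk *

# 3. align uncompressed entries on 4-byte boundaries so they can be mmap'd
cd .. && zipalign -f 4 MyApp_repacked.apk MyApp_optimized.apk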
Why do it? Size. It's the ONLY advantage. Most people seem to forget that compression requires decompression at runtime, and the more aggressive the compression algorithm used, the more CPU cycles and memory it takes to decompress, which in fact makes your whole experience slower and less enjoyable. About the only beneficial thing in the whole "optimization" is running zipalign, but then whatever gain that brings is lost to the heavy compression everywhere else.
Don't get me wrong, shrinking everything has its advantages; for example, I was able to whip a WHOLE Hero build into the space of the death SPL's /system partition because of compression, but actual usage was frustrating because the system would so often run out of memory (or, if used with 64 MB compcache, be just too slow). I actually created another build (just a test) with no compression anywhere, just zipalign, and it was so enjoyably fast, but having only 20 MB left in /data after a fresh install is no fun...
So there you go! roptipng > 7za > zipalign makes everyone a master of "optimization" (we have only about one person here on the forum who's actually gotten into the nitty-gritty of editing baksmali dumps and actually trying to make the damn thing run faster/properly).
If you have ever had the T-Mobile Motorola Z3, then you know that T-Mobile DOES slow down the OS on phones where they can, or at least some of their proprietary apps do, like MyFaves. That's what got me into modding a lot (and computer programming); I just wanted a faster Z3, lol.
If compcache compresses RAM to fit more things in it, it will also have to decompress things to use them. So there is more stored in the RAM, but it takes longer to reach it.
What I wanna know is how long it takes to compress and decompress, to see if it is worth it. Any ideas?
That was my way of thinking too. I played with comp, swap, comp+swap, and comp backing swap. My user.conf file is attached to my signature. Swap only is what worked best for me.
It depends more than anything on how the phone is used. If you're using apps that take a lot of RAM but don't need much CPU power (something with huge images, or something like CoPilot which pushes RAM to its limits, plus background processes), then compcache is good. However, for CPU-intensive stuff and applications, swap only is much better. Compcache is good for things that run in the background and don't have to be accessed all the time, such as the launcher.
feel free to correct me/wreck me
Thanks for the replies guys, the things I do need swap more than anything, so I'll stick with swap.
B-man nailed it; like I said, I'm running swap w/o compcache because it works best for me. CoPilot is laggy and causes background programs to FC on this config, but I rarely use it in favor of Google Maps, which is why it doesn't bother me.
Just curious, I've been flashing the latest nightlies and in the CyanogenMod settings I see 'use compcache'. I have it unchecked; is there any difference if I check it? I found a YouTube video comparing two phones running with and without compcache. Compcache seemed to load pages better over time, but not initially. Any help would be much appreciated.
Copied from this post on another thread..
Very roughly: you have a finite amount of memory (RAM). When memory is accessed it goes through virtual addressing, so an application is given a piece of memory, but this isn't real RAM; the operating system manages this and maps it to where the data really is. Because of this system, the OS can give out more memory than is actually available. It can then store some of this memory on a storage medium and "swap" it with some other programme's memory when one is needed and the other isn't. This is how swap works.
With compcache, instead of storing the dormant memory on a hard disk it is compressed and stored in the RAM itself on a virtual disk. This takes up some RAM, but because it is compressed, more RAM is spare than if the data were left in memory as it is. Again this has the effect that more memory space can be handed out than the RAM that is really there.
Because Android manages applications so that when memory runs out it just closes applications running in the background, more applications can reside in the larger virtual memory space than before, making multi-tasking more pleasant and responsive.
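To make the mechanics a bit more concrete, here's a minimal sketch of how a compressed-RAM swap device gets enabled on a rooted phone whose kernel exposes the newer zram interface (the original compcache/ramzswap tooling used a separate rzscontrol utility instead, and the 32 MB size here is just an example):

Code:
# tell the compressed block device how much data it may hold (stored compressed in RAM)
echo $((32 * 1024 * 1024)) > /sys/block/zram0/disksize

# format it as swap and switch it on
mkswap /dev/block/zram0
swapon /dev/block/zram0

# confirm the kernel is using it
cat /proc/swaps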
I know that nfinitefx45 took compcache out of his latest builds in both the Stock and ZenHeroFX ROMs. I don't know all the technical reasons behind it, but I think it just didn't improve performance enough to be worth leaving it in. Granted, those are Sense-based ROMs, which are generally a little slower and "bloatier" in nature than AOSP, so the performance difference may be greater in CM.
chromiumleaf said:
I know that nfinitefx45 took compcache out of his latest builds in both the Stock and ZenHeroFX ROMs. I don't know all the technical reasons behind it, but I think it just didn't improve performance enough to be worth leaving it in. Granted, those are Sense-based ROMs, which are generally a little slower and "bloatier" in nature than AOSP, so the performance difference may be greater in CM.
Thank you for the response, just wasn't sure. Since Darch left it unchecked, I figured I would ask
I've been trying to recover some space on my Nexus one and have been largely successful in doing so with a combination of tricks, but while looking at my partitions and tallying up the numbers something didn't seem to be adding up right; the unit is supposed to have 512MB flash, but I was coming up about 60MB short.
I found this thread which discusses the partition layout of the N1; the sizes they show all seem to match up well with what my device shows. Now, the hex address of the end of the last partition (user data) ends just a couple MB short of 512 MB; the start of the first partition (misc), however, seems to begin over 60 MB into the memory space... is there a reason for this, and if so, what's occupying those lowest 63.75 MB of flash space?
Baseband, AKA "radio", is what you're looking for. Unless you want your Nexus not to boot anymore, it's not advisable to try and repartition baseband space.
Instead of working hard and uselessly wasting effort, use A2SD or any other kind of linking to SD-mounted EXT partition. No matter what you try, Nexus doesn't have nearly enough internal space for any common use.
That answers my question, thank you.
As I mentioned in my original message, I was successful in freeing enough space on my device; a combination of moving apps and libraries (copy to system/lib and symlink back to original location) into the system partition and clearing out bulky or unnecessary apps has left me with over 60MB of free data space without even having to resort to fancy A2SD business (just normal android move to SD card). I was simply curious about what was filling in the remaining space on the flash chip and the radio pretty much fits the bill.
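For anyone who wants to try the same library trick, the general shape of it from a root shell is roughly this (the package and library names are purely illustrative, /system needs enough free space, and the exact remount command varies by device):

Code:
# make /system writable
mount -o remount,rw /system

# move a bulky native library into /system/lib (cat works even without busybox cp)
cat /data/data/com.example.bigapp/lib/libbigstuff.so > /system/lib/libbigstuff.so
rm /data/data/com.example.bigapp/lib/libbigstuff.so

# leave a symlink behind so the app still finds it where it expects
ln -s /system/lib/libbigstuff.so /data/data/com.example.bigapp/lib/libbigstuff.so

# put /system back to read-only
mount -o remount,ro /system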
As someone with a pretty average amount of user apps (a bit less than 100) and 700 MB of user space taken, I can't see the point in doing what you mentioned for anything but pure fun. But if that suits you - I won't argue.
Well, by my app drawer I'm sitting at ~125 apps (44 purely in data, 34 moved to SD with the standard Android method, the rest either native system or moved there), and if my "puny" N1 can have 60 MB free without even needing ext-style A2SD, I'm not quite sure how the N1 doesn't have "nearly enough internal space for any common use". Seems to me the point (not "pure fun" as you dismissively imply) of doing what I've done is to be able to keep using a pretty decent phone that still has more than enough storage space if you make the least bit of effort to manage it.
But hey, who am I to judge if you prefer to buy whatever latest phone the carriers tell you you should want every 12 months just so they can cram more bloated apps on it?
I appreciate the answer to my initial question about what's using the lowest block of flash storage (I was simply curious about what was using it - I couldn't find information if it was flash overprovisioning or some other low-level portion of the OS using it), but I don't really appreciate the unnecessary negative attitude and commentary for what was just a simple question. Thanks anyways.
I guess you didn't understand my point(s). I'll elaborate:
First and foremost, my point is this: the N1 is a crap phone. Having it for over a year, and then trying to adapt it for my wife for another 3 or 4 months before giving up on it, taught me that this phone can't be dealt with by anyone who doesn't accept its touchscreen limitations. It was so refreshing having a phone (MT4G in my case) that just reacts without fuss, without expecting it to crap out at any given time - not even mentioning the huge speed-up. The price of the "upgrade" (selling the N1 and buying any previous-generation phone, like DHD/MT4G/DS/DZ) can be brought down to as low as $50, and the benefits are huge; I've already written about it a couple of times on the forum.
To the storage point (actually, several points):
The N1's NAND is painfully slow compared to anything, even a regular Class 2 SD card. You can try copying a large file from NAND to EXT and back, from NAND to NAND, and from EXT to EXT, and see which takes more time. You're likely to discover that A2SD actually adds performance instead of hurting it.
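If you want to check that for yourself, a crude timing test with dd from a root shell is enough to show the difference (paths are illustrative; /sd-ext assumes an ext partition is mounted there, and busybox provides dd/time on most of these ROMs):

Code:
# time a 32 MB write to internal NAND (/data)...
time dd if=/dev/zero of=/data/local/nandtest bs=1048576 count=32

# ...and the same write to the SD card's ext partition
time dd if=/dev/zero of=/sd-ext/sdtest bs=1048576 count=32

# read both back (reboot or drop caches between runs for honest numbers)
time dd if=/data/local/nandtest of=/dev/null bs=1048576
time dd if=/sd-ext/sdtest of=/dev/null bs=1048576

# clean up
rm /data/local/nandtest /sd-ext/sdtest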
My app data (/data/data/*) alone takes roughly the same space as your whole internal /data storage, so I guess the number of apps alone isn't that meaningful a measurement. I still call it perfectly normal and average data usage - I don't have anything special installed, no heavy games that save 200+ MB of data on internal memory, just apps like Goggles, Flash, iGO and a couple of other big apps that aren't movable by normal means (and tend to crap the system out when they're forced to move). The problem with your approach is not even the one-time amount of work you had to invest to make that space, but the amount of work you'll have to keep investing to keep the phone running - moving system updates to /system upon every update, clearing the browser cache, etc. - generally, keeping things in constant check. Free time is something you learn to appreciate when you don't have enough, and a more hassle-free setup is always preferred IMHO.
But again, different people have different needs, so while I can post my point of view - I don't argue with yours.
Thank you for elaborating, actually; it clarifies much that was not apparent in your earlier posts. This thread isn't really about the pros and cons of the N1 so all I'll say is that the advantages of the N1 (small size, OLED, build quality, tricolor trackball LED, etc..) still outweigh its manageable downsides for me, even compared to very modern handsets - so I'll stick with it until I can find a suitable upgrade that I'm happy with (is it so hard for HTC to make a <=4" qHD AMOLED? Seriously...).
Your point about the NAND being slow is interesting; this is something I hadn't heard and will have to benchmark. If it pans out it would be a point in favor of A2SD, but not really in favor of replacing the device over it.
The upkeep I don't find that bad; Titanium Backup makes integrating updated system apps a single touch for the batch, and I've only got a couple of libraries symlinked into /system that are unlikely to be frequently updated. With the space I've freed I shouldn't need to clear browser caches nearly as often - so it actually saves me time and frustration regularly for the one-time effort.
Thanks again for taking the time to reply and to clarify your points
If A2SD+ doesn't work for you, you could set up custom MTD partitions like I did, using Firerat's custom MTD (if you Google it you will find it). Basically you can shrink your system partition down to almost half, because that space is just being wasted - or to whatever size you want to define it as. I'm using MIUI, and the system partition I defined is 120 MB (4 MB are free just in case) and my cache partition is 15 MB. That leaves 301 MB free for user data. I have 107 user apps installed, about 10 games or so, and I still have 120 MB free for user data; for me that's more than enough. This way your phone won't be buggy, because you will only use the system partition for your ROM. Again, I would suggest MIUI since it takes minimal space and is very smooth and stable with amazing battery life (I use the Tiamat kernel). Hope this helped.
Oh, and if you use A2SD in conjunction with custom MTD, then you can have close to 750 MB of space available for user data, given that your sd-ext partition is 512 MB (which was stable for me using an 8 GB card). That's basically rivaling new phones' memory, so don't write the Nexus One off just yet; it can surprise anyone that knows how to play with it, or who has stuck with it for 2 years like me, lol.
I've already been using root access with a shell and Titanium Backup to move apps and libraries into the system partition without resizing it, so I'm already using the available space there. The only major difference is that you've dramatically shrunk your cache partition from the default of (IIRC) 100 MB down to 15 MB; this seems like a pretty huge reduction, and I feel it could have performance implications, especially when running larger apps...
Other than that, if I find my current space as set up proves to be inadequate in the future (it seems just fine for now) then a2sd appears to be the best option for those who need even more additional space on a nexus one.
15 MB is more than enough for the cache partition unless you plan to download huge 3D games, and as we all know gaming isn't the reason we have held on to the Nexus One for so long. I haven't seen any app large enough to fail to install due to my partition size. I messed around with that too: first I had it set at 5 MB, but that made the Market force close every time; then I set it at 10 MB, which was stable but large apps couldn't download; and then I tried 15 MB and it hasn't given me a single problem. Otherwise all that space is wasted, so why not dedicate it to user data? With a 20 MB partition you can download almost all games that can function on the Nexus One, but since I'm not a big mobile gamer I stuck with 15 MB cache.
Most normal programs don't use /cache.
To fix your cache market issue:
Code:
su
busybox mv /cache/download /sd-ext/download
ln -s /sd-ext/download /cache/download
If you don't have a sd-ext you could use /sdcard/download instead. The directory will already exist if you've downloaded anything from the browser, so I just remove /cache/download before linking. I used to get package file invalid errors from this setup though...
Ti Backup will also let you move stuff to /system and re-odex your ROM instead of shrinking /system. Sure, every time system stuff updates you need to click a few times, but unless space is really tight, it works fine. The re-odexed ROM seems to boot faster for me than with external dalvik-cache, too, but that could just be me pretending. I've never busted out the stopwatch.
I like to keep apks on a2sd and put dalvik-cache on internal memory. It's kinda like raiding the two interfaces together to get the sum of the bandwidths of both when launching a program.
siberx: I'm sticking with the N1 until I find a decent phone that has been designed to fit in my pocket instead of sitting in a purse or on the bar too... I considered the glacier for a while, but, near as I can tell, the only benefits of going there are better touch screen and gpu.
I used Firerat's MTD patch to rejigger my girlfriend's Desire partitions to something more sensible (something like a 230 MB system partition stock? ridiculous!) and that worked smashingly; the same trick on my N1 didn't go so well though. It seems like my Nexus with CM6.1 on it is still using the cache partition for dalvik at least partially, and I think shrinking it down to 20 MB made it too small to boot right. Not a big deal anyways; I've got enough space to work with as is.
I tried to do some benchmarks on my internal flash for comparison, but the only decent benchmark I could find (without getting manual about it on the command line) was Passmark's mobile benchmark; problem is they want 90 MB free to run the internal memory benchmark, so my 60 MB isn't cutting it.
Anybody know of a decent benchmark that will bench both internal and SD read/write speeds that doesn't need such a huge chunk of free space?
ezdi: I considered for a while buying a G2 for the faster CPU/GPU and improved touchscreen, but ultimately decided against it due to the extra weight and thickness (combined with the Nexus' other advantages like OLED and the tricolour LED). Eventually some manufacturer will figure out there's still a market for compact high-end phones...
ezdi said:
siberx: I'm sticking with the N1 until I find a decent phone that has been designed to fit in my pocket instead of sitting in a purse or on the bar too... I considered the glacier for a while, but, near as I can tell, the only benefits of going there are better touch screen and gpu.
The better touch screen is reason enough by itself.
GPU, much faster and bigger internal memory (both system and data), much faster and bigger RAM, and most of all - 90% HW compatibility with one of the most popular devices in the world (the DHD) - means staying updated and speedy with ROMs that fly where they crawl on the Nexus (if they exist for it at all). Plus, all ROMs besides ICS are 100% functional: CM, MIUI, Sense 3/3.5, you name it. And if that's not enough, a 20% hassle-free overclock is standard.
From a quite satisfied Glacier owner.
Hi all
I very much doubt this post belongs here, but since I am a new member I have no choice but to put it here.
I have recently (accidentally) purchased a Nexus 7 (grouper) and decided to try some of the custom ROMs available; in the end I tried most of them.
Here are my comparison results.
As with all things, putting an "order" on the ROMs is subjective; the "best" ROM is the one that works for the individual, with that individual's particular needs and wants.
As an embedded design engineer I am obsessed with speed and efficiency, memory usage, putting as much data on the screen as possible, then finally compatibility and lack of bugs (in that order); my results are biased by that order.
Disclaimer........
1) OK, I know that Android is not exactly ideal if you're looking for an efficient, well-written OS (who the F came up with the idea of running everything in a VM!!! A minimal Linux system will run in 85 MB of RAM with a full X server; Android STARTS at 400 MB+, sitting there doing nothing!!! That's near my cut-down Windows 7 running on an i7), but until a decent Linux distro is ported to full touch I am stuck with it.
2) For the N7 the ability to change the DPI is essential. It is VERY stupid that the OS does not check and change this automatically; otherwise it's like having your 1920x1080 monitor stuck on 1200x600 permanently. Very stupid!!! So I am assuming the ROMs without this facility will function at 160 dpi correctly; I have not been able to check this for every ROM.
All ROMs were flashed with TWRP (latest), after a wipe of cache, dalvik cache, factory reset, and system.
Memory was checked via Settings > Apps > Running, immediately after first boot, then after a clean reboot, then after a cache/dalvik-cache wipe and clean reboot.
IMPORTANT NOTE.....
Google apps (gapps)... in terms of memory usage, gapps is the worst piece of #%%#%%$#$% sh$%^%$%t bloatware I have ever encountered in my OS experience (and given that I spend most of my time in Windows, that's saying something).
On any of the ROMs I have tried, flashing gapps adds at LEAST 150 MB of unneeded memory usage, and depending on the ROM that can go up to 250 MB. Even using a minimal gapps with only Phonesky, framework, login and setup still produces a significant 50+ MB hit. Unfortunately, in many cases some of gapps is essential, and some OS functions are broken without at least the framework.
This situation seems unacceptable to me; all the ROMs should function correctly without gapps, and without the bloat. If some dev does not address the situation, I will.
1) Prime Grouper D03-06
Tablet ui...... yes
DPI changer ...yes
Size custom nav bar ....yes
Speed....... good
Response ... good
This is first on the list for one good reason: memory usage!!!
Before flashing any kind of gapps
First reboot 360mb
Second reboot 320mb
Cache wipe reboot 270mb
Subsequent reboots 266mb ( stable )
Obviously team Vanier know their sh**t; their OS is running in nearly HALF the memory space of other ROMs, and on the whole with few bugs. But what's really impressive is that the memory usage is incredibly stable for an Android OS - zero memory leaks. Leave the device at ~320 MB (say you opened an app and it was cached, etc.) for 24 hours, check again, and lo and behold it's still roughly 320 MB (obviously internal processes move this number a little, but only by <>5 MB). This is not the case with the stock ROM or many other custom ROMs.
Everything is not all roses though. Using Prime without flashing gapps at all exposes quite a few bugs: the notification panel does not work at all, clock settings are broken, a few apps fail (MX Player for a start), and others.
Flashing a super-minimal gapps fixes most of the issues - notifications are back, a lock screen turns up, all the settings appear fixed - BUT it also knocks out Vanier's keyboard and totally ruins the memory handling.
After flashing "micro" gapps ( and going through setup, adding valid account etc )
First reboot 430mb
Second reboot 360mb
Cache wipe reboot 320mb
Subsequent reboots 360mb ( NOT STABLE, can vary up to 400mb+ with time )
Obviously I would love to see Prime fix the outstanding bugs and produce a custom set of gapps apks that don't screw up this fine ROM.
2) Smooth ROM v5
Tablet ui...... yes
DPI changer ...no
Size custom nav bar ....yes
Speed....... good
Response ... excellent ( the best )
Gapps are included
First reboot 470mb
Second reboot 420mb
Cache wipe reboot 400mb
Subsequent reboots 440mb ( NOT STABLE, can go 500mb+ )
This ROM is second due to a total lack of bugs; after much mucking around I cannot find a single setting or feature that does not work correctly, plus the UI is very, very responsive - the best of all I have tried. It would be top but for the lack of a DPI change option and the fact that memory usage is nearly double that of Prime.
Will post the other 20-odd results later.
What's more interesting is that identical apps on the galaxy S3 take up 2-3 times more memory. 1GB is fine on the N7 but you have to do a lot of fiddling to get the S3 running with 1GB. Must be down to the Tegra architecture. Smooth 5 runs great on the N7 (especially with greenify app)
Sent from my Nexus 7 using XDA Premium HD app
3) Cookies_Cream-1.3.1
Tablet ui...... yes ( built in as standard )
DPI changer ...yes ( 160 dpi native )
Size custom nav bar ....yes
Speed....... ok
Response ... ok
Before Gapps
First reboot 560mb
Second reboot 560mb
Cache wipe reboot 560mb
Subsequent reboots 560mb ( stable )
Although a memory-hungry beast, this ROM is optimised for the N7 resolution AS STANDARD: you get the full true tablet UI, AND with the Paranoid Android framework it is ultimately compatible with any app at any resolution, and best of all it all works!! No bugs that I can see; although memory use is very high, at least it is quite stable before gapps.
If you want the full tablet experience out of the box then consider this.
I have not tried a memory test after gapps; 560 was too high for me without gapps, let alone with.
4) BeatMod_CrystalClear_v2.3.zip
Tablet ui...... yes
DPI changer ...no
Size custom nav bar ....yes
Speed....... good
Response ... good
Gapps are included
First reboot 480mb
Second reboot 410mb
Cache wipe reboot 400mb
Subsequent reboots 440mb ( NOT STABLE can go 500mb+ )
Being pure CM10, this one is very different in style to the AOKP-based ROMs, but is almost identical to Smooth ROM in terms of memory usage, although less responsive. On the upside it is packed full of sound-enhancing mods and an upgraded Bravia engine for video.
the rest are in no particular order
gsw5700 said:
What's more interesting is that identical apps on the galaxy S3 take up 2-3 times more memory. 1GB is fine on the N7 but you have to do a lot of fiddling to get the S3 running with 1GB. Must be down to the Tegra architecture. Smooth 5 runs great on the N7 (especially with greenify app)
Sent from my Nexus 7 using XDA Premium HD app
Jesus, I am wondering how 512 MB devices ever ran!!! Android MUST have gotten a lot more bloated since the 2.x days, otherwise nothing would have worked!!!
jubei_mitsuyoshi said:
Jesus, I am wondering how 512 MB devices ever ran!!! Android MUST have gotten a lot more bloated since the 2.x days, otherwise nothing would have worked!!!
You got it! There's a huge difference between the Gingerbread days and now. I remember when my Droid Eris would use 200 megs of RAM for the OS; now... it's a lot more. Why do you think new devices are getting 2 gigs of RAM? I'm guessing Key Lime Pie will only use more and more memory to give us a better experience.
Sent from my Nexus 7 using xda app-developers app
Triscuit said:
You got it! There's a huge difference between the Gingerbread days and now. I remember when my Droid Eris would use 200 megs of RAM for the OS; now... it's a lot more. Why do you think new devices are getting 2 gigs of RAM? I'm guessing Key Lime Pie will only use more and more memory to give us a better experience.
Sent from my Nexus 7 using xda app-developers app
Hmm, I'll have to take your word on the better experience, having just come to Android.
Prime runs at 266 MB without gapps; that's a bloody good number for Android. It just needs the bugs fixed and a minimal/micro gapps (just Play Store functionality, without the paid services) integrated so all the settings etc. function, and then we are talking as good as it gets with Android.
Generally if you want something done you do it yourself, but in this case I am in the middle of becoming proficient in C/C++ again (it's amazing, when buried in hardware, PCB design, SPICE sims, Matlab etc., how one can just forget how to code; I always thought it would be like riding a bike - WRONG!), so learning Java from scratch is out for at least 6 months. I am very much hoping that Prime does it for me.
Before you go any further you should define exactly what you mean by "memory usage".
I challenge you to correlate your "memory usage" statistic to anything you can find in /proc/meminfo.
Go ahead, give it a try.
In any modern OS - including Android - 100% of DRAM is in use. The only thing which remains is some quibbling about whether you should give up file cache space for process memory space or kernel private memory, and the answer to those questions always depend on the nature of the workload.
The whole of dalvik is built on top of native shared libraries that are substantially smaller than the totality of shared libraries present in (let's say) a recent Linux distro. They can be memory mapped in copy-on-write or read-only fashion to a large number of process spaces, and so in fact it is a strategy of the "system_server" process to preload most of them. That way new activities spring to life quickly, rather than being required to demand-load and link everything from scratch.
Bottom line: it is an intentional strategy of android to "use up memory" right from the get-go. Most of that "used memory" is shared libraries that are mapped into activities as they come and go.
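If you want to poke at this yourself, a couple of stock commands already give a far better picture than the Settings screen (the package name at the end is just an example):

Code:
# system-wide view: total, free, buffers and file cache, all in kB
adb shell cat /proc/meminfo

# Android's own per-process accounting, where shared pages are split fairly (PSS)
adb shell dumpsys meminfo

# the same report narrowed to a single package
adb shell dumpsys meminfo com.android.chrome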
So, would I want to run engineering applications that require 800 MB of heap space on an android OS tablet with 1 GB of RAM? The answer is clearly "no" in that case, but mostly because Android devices are not targeted for that kind of work.
For comparison, BTW, my Win 7 x64 box that is nearly bare of applications (I only use it as a VM host) needs 1 GB of committed page space to sit there and do nothing. Android isn't doing so badly in comparison.
cheers
bftb0 said:
Before you go any further you should define exactly what you mean by "memory usage".
I challenge you to correlate your "memory usage" statistic to anything you can find in /proc/meminfo.
Go ahead, give it a try.
In any modern OS - including Android - 100% of DRAM is in use. The only thing which remains is some quibbling about whether you should give up file cache space for process memory space or kernel private memory, and the answer to those questions always depend on the nature of the workload.
The whole of dalvik is built on top of native shared libraries that are substantially smaller than the totality of shared libraries present in (let's say) a recent Linux distro. They can be memory mapped in copy-on-write or read-only fashion to a large number of process spaces, and so in fact it is a strategy of the "system_server" process to preload most of them. That way new activities spring to life quickly, rather than being required to demand-load and link everything from scratch.
Bottom line: it is an intentional strategy of android to "use up memory" right from the get-go. Most of that "used memory" is shared libraries that are mapped into activities as they come and go.
So, would I want to run engineering applications that require 800 MB of heap space on an android OS tablet with 1 GB of RAM? The answer is clearly "no" in that case, but mostly because Android devices are not targeted for that kind of work.
For comparison, BTW, my Win 7 x64 box that is nearly bare of applications (I only use it as a VM host) needs 1 GB of committed page space to sit there and do nothing. Android isn't doing so badly in comparison.
cheers
Hmmm
Well, let's start with the last point first: I run a heavily customized (RT7 Lite, WinToolkit, buclean) Windows 7 (an EE edition which I mastered myself) on an Asus G15W with an i7, 8 GB of RAM and a GeForce 470, with all drivers in and full Aero on: <>560 MB memory usage for the system. It can go down to 500 if you disable the Nvidia startups and services, but then you lose the Nvidia control panel.
I totally take the point that memory usage in modern multi-core systems is friggin' complex. Obviously these memory stats are not supposed to be definitive in any way, but given that all the tests are run on the same hardware with the same built-in tool, they can be used as COMPARATIVE results, i.e. you can say ROM X is more efficient than ROM Y given that they do the same thing but with different memory results.
By definition, any code abstraction away from ones and zeros makes that code less efficient: an entire graphical OS can fit into 1.8 MB if written in x86 ASM; the same code becomes <>20 MB in C, 25 MB in C++, and 80 MB+ in VM bytecode. The same pattern can be found in memory usage.
Any virtual machine, no matter how clever (and Dalvik is bloody clever), is a glorified interpreter, hence slower (by a few factors) than C/C++, which is itself slower by a few factors than ASM.
My opinion on caching is DON'T. Unless someone comes up with a really psychic piece of code that can genuinely predict the chaotic needs of the average human, all caching algorithms are just guessing, and do I trust the system to free up all that memory in time when something (as you say) calls up a massive heap, or worse, mallocs it directly? Errrr, no.
But that's just an opinion; I am totally willing to recant if I see evidence and accurate benchmarks to the contrary (and you seem to know your stuff, so if I'm way off the mark please enlighten me!).
I have opened this thread for all users (me included) with no idea about ART, so here is some info about it: what ART is, how it works, pros and cons.
Maybe it's helpful for some peeps here. Source: Android Police
Link p1
Link p2
-------------
Part 1:
It's fair to say that Android went through some chaotic years in the beginning. The pace of development was frantic as the operating system grew at an unprecedented rate. An as-yet undetermined future led to decisions that were made to conform to existing hardware and architectures, the available development tools, and the basic need to ship working code on tight deadlines. Now that the OS has matured, the Android team has been giving more attention to some of the components that haven't aged quite as well. One of the oldest pieces of the Android puzzle is the Dalvik runtime, the software responsible for making most of your apps run. That's why Google's developers have been working for over 2 years on ART, a replacement for Dalvik that promises faster and more efficient execution, better battery life, and a more fluid experience.
What Is ART?
ART, which stands for Android Runtime, handles app execution in a fundamentally different way from Dalvik. The current runtime relies on a Just-In-Time (JIT) compiler to interpret bytecode, a generic version of the original application code. In a manner of speaking, apps are only partially compiled by developers, then the resulting code must go through an interpreter on a user's device each and every time it is run. The process involves a lot of overhead and isn't particularly efficient, but the mechanism makes it easy for apps to run on a variety of hardware and architectures. ART is set to change this process by pre-compiling that bytecode into machine language when apps are first installed, turning them into truly native apps. This process is called Ahead-Of-Time (AOT) compilation. By removing the need to spin up a new virtual machine or run interpreted code, startup times can be cut down immensely and ongoing execution will become faster, as well.
At present, Google is treating ART as an experimental preview, something for developers and hardware partners to try out. Google's own introduction of ART clearly warns that changing the default runtime can risk breaking apps and causing system instability. ART may not be completely ready for prime time, but the Android team obviously feels like it should see the light of day. If you're interested in trying out ART for yourself, go to Settings -> Developer options -> Select runtime. Activating it requires a restart to switch from libdvm.so to libart.so, but be prepared to wait about 10 minutes on the first boot-up while your installed apps are prepared for the new runtime. Warning: Do not try this with the Paranoid Android (or other AOSP) build right now. There is an incompatibility with the current gapps package that causes rapid crashing, making the interface unusable.
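If you want to confirm which runtime a KitKat device is actually using, the selected library is exposed as a system property, so a quick adb check looks like this (switching itself is best left to the Settings toggle described above):

Code:
# prints libdvm.so when running Dalvik, libart.so when running ART (Android 4.4)
adb shell getprop persist.sys.dalvik.vm.lib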
How Much Better Is It?
For now, the potential gains in efficiency are difficult to gauge based on the version of ART currently shipping with KitKat, so it isn't representative of what will be possible once it has been extensively optimized. Thus far, estimates and some benchmarks suggest that the new runtime is already capable of cutting execution time in half for most applications. This means that long-running, processor-intensive tasks will be able to finish faster, allowing the system to idle more often and for longer. Regular applications will also benefit from smoother animations and more instantaneous responses to touch and other sensor data. Additionally, now that the typical device contains a quad-core (or greater) processor, many situations will call for activating fewer cores, and it may be possible to make even better use of the lower-powered cores in ARM's big.LITTLE architecture. How much this improves battery life and performance will vary quite a bit based on usage scenarios and hardware, but the results could be substantial.
What Are The Compromises?
There are a couple of drawbacks to using AOT compilation, but they are negligible compared to the advantages. To begin with, fully compiled machine code will usually consume more storage space than that of bytecode. This is because each symbol in bytecode is representative of several instructions in machine code. Of course, the increase in size isn't going to be particularly significant, not usually more than 10%-20% larger. That might sound like a lot when APKs can get pretty large, but the executable code only makes up a fraction of the size in most apps. For example, the latest Google+ APK with the new video editing features is 28.3 MB, but the code is only 6.9 MB. The other likely notable drawback will come in the form of a longer install time for apps - the side effect of performing the AOT compilation. How much longer? Well, it depends on the app; small utilities probably won't even be noticed, but the more complex apps like Facebook and Google+ are going to keep you waiting. A few apps at a time probably won't bother you, but converting more than 100 apps when you first switch to ART is a serious test of patience. This isn't entirely bad, as it allows the AOT compiler to work a little harder to find even more optimizations than the JIT compiler ever had the opportunity to look for. All in all, these are sacrifices I'm perfectly happy to make if it will bring an otherwise more fluid experience and increased battery life.
Overall, ART sounds like a pretty amazing project, one that I hope to see as a regular part of Android sooner rather than later. The improvements are likely to be pretty amazing while the drawbacks should be virtually undetectable. There is a lot more than I could cover in just this post alone, including details on how it works, benchmarks, and a lot more. I'll be diving quite a bit deeper into ART over the next few days, so keep an eye out!
---------------------------------------------
Part 2 in next post.....
Part 2
---------
By now you've probably heard about ART and how it will improve the speed and performance of Android, but how does it actually perform today? The new Android Runtime promises to cut out a substantial amount of overhead by losing the baggage imposed by Dalvik, which sounds great, but it's still far from mature and hasn't been seriously optimized yet. I took to running a battery of benchmarks against it to find out if the new runtime could really deliver on these high expectations. ART is definitely showing some promise, but I have to warn you that you probably won't be impressed with the results you'll see here today.
Reality Check
Let's be honest, benchmarking apps tend to be inaccurate and unreliable, often giving wildly varying results even when run in precisely identical situations. However, they are the only option available for recording meaningful and measurable values on performance. Further, since most popular benchmarks are built on the NDK (Native Development Kit), they won't gain any benefit from running under ART. Despite these limitations, there are some interesting and unexpected results that help us learn a little more about the current state of performance.
How The Benchmarks Were Run
Each benchmark was run at least 4 times on a completely stock Nexus 5 (it isn't even rooted) with both Dalvik and ART. To ensure there was no interference from apps at startup, a minimum of 5 minutes was given after a reboot before any tests were run. In addition to the 6 benchmarking apps listed below, I also tried 2 browser benchmarks (SunSpider & BrowserMark) in Chrome, but neither displayed significantly different scores. So, let's get to the results.
Linpack for Android
One of the key factors in getting good test results is knowing that the tools are measuring the right thing. While many of the benchmark apps target the NDK, a few stick to the SDK. The first and most consistent among them is Linpack for Android, a port of the already popular benchmarking app used throughout numerous computing platforms. It produces a score by performing a series of calculations on floating point numbers. I think this is an obvious choice after reading the description, "This test is more a reflection of the state of the Android Dalvik Virtual Machine than of the floating point performance of the underlying processor." Thanks to ART, scores are 10%-14% higher than they would be with Dalvik. Not too shabby…
Real Pi Benchmark
Calculating digits of Pi is another popular way of stressing a processor, and particularly suitable because most methods stick to integer calculations and avoid floating-point math entirely. Along with Linpack, this gives us coverage of both basic mathematical operations. On top of it, Real Pi happens to use native code to perform the AGM+FFT formula, but uses Java for Machin's formula. On the native side, ART came out about 3.5% faster, probably due to interface optimizations rather than mathematical performance. More importantly, testing with the java code turned out to be 12% faster. (link) Note: in this test, lower numbers are better.
Quadrant Standard
The previous tests are highly specific to mathematical performance, so it's time to branch out to test more of the system. Both Linpack and Real Pi show some positive improvement with ART, but Quadrant gave a result that borders on the amazing, perhaps even too good. The CPU score is off the charts for ART, almost doubling that of Dalvik, which is substantially better than even the most optimistic estimates we've heard so far... While tests for I/O, 2D, and 3D rendering show fairly negligible differences, Dalvik does take an oddly high 9% advantage in the memory test.
3D Mark
I was leery of using a benchmarking app that clearly focuses on the NDK, as it theoretically shouldn't be affected very much by ART. However, as the tests were run, an interesting pattern emerged where the Dalvik runtime repeatedly held a slight advantage. It's difficult to attribute a reason for Dalvik to do better here, but I'm open to theories.
AnTuTu Benchmark
Breaking performance down even further, AnTuTu helps to expose a pattern. It's increasingly clear that ART is making significant strides with floating-point operations, but doesn't usually turn out huge gains for integers. A strong showing in "RAM Operation" also hints at better use of caching as opposed to just raw memory I/O. These high scores indicate areas where the Dalvik virtual machine was probably very expensive, causing more extensive overhead. The other results weren't particularly remarkable except for the Storage I/O, which might suggest a couple of specific optimizations. One significantly low score appears for UX Dalvik, but it's not clear what AnTuTu is measuring, so this may not be particularly relevant.
CF-Bench
For the ultimate in number production, Chainfire's own benchmark tool takes out a lot of the guesswork by performing tests built on both the SDK and NDK. Again, native code displays a small but curious advantage on Dalvik. Here we can see the integer calculations are swinging back towards Dalvik, as well. Mostly confirming the pattern, floating-point operations demonstrate a significant speed gain, this time in the 23%-33% range.
Other Interesting Measurements
Measuring the first boot after switching runtimes isn't your typical test, no doubt, but the time it takes is quite striking. I wanted to record just how long it took to complete both the App Optimization step and then the total time to actually reach the unlock screen. When I ran this test, I had 149 apps installed.
The Other Stuff
While numbers can be helpful, they don't tell the full story. Benchmarks usually push the hardware to work as hard as possible for a few seconds, then switch to a new test that does the same thing. Sadly, this ignores details that aren't easily measured. I don't have a good way to measure the smarter timing of memory management (especially garbage collection) or better handling of multiple threads. While I can't show numbers for these things, I can demonstrate them. The classic test for a browser simply requires flinging the page as fast as possible and watching it try to keep up. After stress testing Chrome for Android with the mobile version of David's gigantic HTC One review, it turns out that even the supercharged SoC of the Nexus 5 can't quite keep up while running on Dalvik… ART, on the other hand, never lost a pixel. Take a look for yourselves. Videos below.
Fast scrolling with dalvik: http://youtu.be/JGyktLPvORU
with ART: http://youtu.be/L9lpCssSdMc
To be fair, switching to the desktop version and giving a single fling will easily send you into blank screen territory, but it's still obvious that the renderer catches up faster on ART than on Dalvik. When more optimizations are in place, maybe we won't be far off from flawless scrolling even in the desktop version. For another demonstration, a user by the name of spogbiper has posted his own side-by-side comparison with two Nexus 7s. The one running ART seems to be more responsive.
Summary And Conclusions
The numbers and the videos together paint a picture of where ART stands today. It will definitely make a difference, but its current incarnation just hasn't matured enough to deliver significant gains. Floating-point calculations and basic responsiveness are obviously reaping the benefits of the new runtime, but that's about it. There's little or no overall improvement for integer calculations, most regular code execution, or much of anything else. In fact, it looks like gamers would be better served by sticking to Dalvik, for now.
Why aren't the benchmarks blowing us away? If I were to make a guess, it's probably because the first goal in developing ART was to make sure it was functional and stable before the heavy optimizations came into effect. If that's the case, there is probably quite a bit of code for error-checking and logging just to ensure everything is operating as it should, which might even be responsible for more overhead than we had with Dalvik. Even in the places where ART doesn't outperform Dalvik, the numbers tend to remain reasonably close. As subsequent versions of the runtime emerge from Mountain View, we should expect to see the performance gap growing wider as ART pulls ahead.
Now for the real question: is it worth switching to ART right now? Google obviously isn't recommending it for regular users, and I tend to agree. While ART seems very solid and I feel like responsiveness is better - possibly just the placebo effect - there are still circumstances where it is unstable and causes apps to crash. If there is even a single instance where you have to switch back to Dalvik to get an app to run correctly, that inconvenience far outweighs the minimal performance gain you might have had. Once I've finished this series, I will probably stick to Dalvik for the remainder of KitKat; and I imagine most people will be better served by doing the same.
Introducing ART from: source.android.com
ART is a new Android runtime being introduced experimentally in the 4.4 release. This is a preview of work in progress in KitKat that can be turned on in Settings > developer options. This is available for the purpose of obtaining early developer and partner feedback.
Important: Dalvik must remain the default runtime or you risk breaking your Android implementations and third-party applications.
Two runtimes are now available, the existing Dalvik runtime (libdvm.so) and the ART (libart.so). A device can be built using either or both. (You can dual boot from Developer options if both are installed.)
The dalvikvm command line tool can run with either of them now. See runtime_common.mk. That is included from build/target/product/runtime_libdvm.mk or build/target/product/runtime_libart.mk or both.
A new PRODUCT_RUNTIMES variable controls which runtimes are included in a build. Include it within either build/target/product/core_minimal.mk or build/target/product/core_base.mk.
Add this to the device makefile to have both runtimes built and installed, with Dalvik as the default:
PRODUCT_RUNTIMES := runtime_libdvm_default
PRODUCT_RUNTIMES += runtime_libart
------------------------------------------------------------
MANY GREEEETZ+STAY ADDICTED!!!!
Where do I find ART gapps? Thanks
Sent from my LG-E610 using XDA Premium 4 mobile app
velosa said:
Where do I find ART gapps? Thanks
Sent from my LG-E610 using XDA Premium 4 mobile app
Look in the apps section here in the L5 forum; I have uploaded a mini pack, and there is more info there as well.
-CALIBAN666- said:
Look in the apps section here in the L5 forum; I have uploaded a mini pack, and there is more info there as well.
Thanks
Sent from my LG-E610 using XDA Premium 4 mobile app
Thank you very much for your work on this ART presentation! Very nice and perfect! :good:
Conclusion: do not use ART on CM11 for the moment! OK! Wait for more stability...
I've downloaded the KK gapps from cr3pt's thread for CM11; while I'm waiting for more stability I want to ask if they are ART compatible.
Best presentation of ART!
If the gapps are odexed, then yes.
STAY ADDICTED,GREEEETZ!!!