Broadcom announced the release of the Raspberry Pi's GPU source. Here's what a user on Ars Technica had to say about the release; it is most likely what's going on in our device as well, and the reason why it will probably never be overclockable unless Broadcom allows it:
Unfortunately it's not really open source at all. I'm not too familiar with Broadcom's SoC, but it seems like the bulk of the driver is running on a side processor. The open-sourced code simply passes commands and data back and forth between the user's code and the real driver doing the work. Now, a lot of hardware uses firmware these days, so it's tempting to think of what's running on the side processor as firmware. But when the driver running on the main CPU is just an RPC layer, and most of the magic (including things like the shader compiler) is happening on the side processor, then I would think that any person familiar with the hardware/software interface would rightly say that the "driver" is really the code on the side processor, not the open-source code.
Now, having the CPU-side layer as open source is better than not having it, but this is nowhere near an open-source GPU driver. TI does the same sort of thing with the DSP and some auxiliary processors on their OMAP chips: most of the "driver" is running on another processor.
As far as I know, the attempts at making OpenGL ES drivers for the TyTN II have been discontinued. But why did the developers decide to abandon the project?
Did the developers of the drivers ever release the source of their work for any other developers to pick up and continue where the first developers left off?
Has it ever been confirmed that the hardware actually does/doesn't exist, and is/isn't wired in such a way that a driver can be used to add OpenGL ES support? I mean, even if the chip exists and is accessible to the OS, that doesn't necessarily mean the chip is connected to the display of the device in a way that lets you utilize its features.
If so, it'd be pretty much like having a computer with a powerful video card, but without the monitor connected to that card. Instead, the monitor is connected to a poorly performing card in the same computer, making it possible to use the powerful card to generate "screenshots" which are then simply displayed through the weaker card, but still without achieving the performance you would get if the monitor had been connected directly to the powerful card.
Can someone update me a bit on the progress, or perhaps direct me to the right place to read more? I used to closely watch the htcclassaction.org website, but now that website seems to be dead, without any info about why the development just stopped.
I know that this device is getting old and pretty much belongs to the history books now, but now that Android is being ported to the TyTN II, the device may have a new chance of seeing daylight. Perhaps someone is willing to take another look at the driver issue and make a TyTN II video driver for Android? I certainly hope that development didn't stop because it was simply impossible to make a driver. After all, you cannot write a driver for hardware that doesn't exist.
Thanks.
Yeah, I'm up for this. And is there any driver that will install on the default WM 6.1? Thanks in advance for the replies, people.
I don't think it's a matter of history yet. The newer devices with physical keyboards have the same chipset and, therefore, the same performance issues the TyTN II has. The extra 124 MHz the Touch Pro/Touch Pro2 have isn't doing much to remedy the problem.
Excuse me, what exactly is the current problem?
Do some OpenGL ES applications not run at all (an example?), or do they run too slowly (an example?)?
In my "beta" configuration everything seems to work decently.
Regards,
Stefano G.
I think some people were working on it most recently in the Development and Hacking thread... search for "Neos2007 open vg drivers"... They managed to get some acceleration on the much newer MSM7201A devices with the ATI D3D drivers, but when I tried it on my Tilt it threw a device exception error when I ran some D3D samples. What worked for me for now was:
1. Disable Manila/CHome
2. Install GfxBoost 1.1 by Chainfire
3. Install the Neos2007 driver pack 2A
4. Install the HTC-CA drivers
By following the procedure above I was able to actually run GLBenchmark and most of the D3D samples (except the text one), but somehow some D3D samples still break the drivers...
I have tried some Kaiser "SuperRAM" (101/102 MB) ROMs; none of them was completely compatible with the HTC-CA drivers.
With my ROM (no SuperRAM) I obtained the best result by replacing the DLL "ahi2dati.dll" with a renamed copy of "ahi2dati_dm.dll".
Regards.
The possibility of micro-USB-to-HDMI output has been discussed to some extent on these forums, but this one seems to have slipped through the cracks. I don't know if it has actually been tested yet, but I figure it should be an easy thing for our custom kernel modders to test.
zulu99 said:
Inspecting the Froyo 2.2 source code of the Galaxy S, the SiI9234 driver for HDMI output is not compiled.
This driver is compiled and works in the Galaxy Tab; the proof is that if you look at the dmesg log, you can see the SiI9234 loaded and working.
Can one of our developers look at the dmesg of the Galaxy S and confirm that the SiI9234 is not loaded?
If it is not loaded, can our developers compile this driver into the kernel of the Galaxy S?
I don't know how to compile the kernel, but for a developer it is simple to include this driver: you can copy the line relating to the SiI9234 from the config of the Galaxy Tab and paste it into the source of the Galaxy S.
Both source trees have the file MHD_SiI9234.c, covering the SiI9234 transmitter for HDMI output, produced by Silicon Image.
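For reference, the change zulu99 describes amounts to enabling one option in the Galaxy S kernel config before rebuilding. A sketch of the idea; the symbol name below is an assumption, and the real SiI9234 line should be copied verbatim from the Galaxy Tab defconfig:

```
# Hypothetical fragment of the Galaxy S kernel .config -- copy the real
# SiI9234 line from the Galaxy Tab config rather than trusting this name.
CONFIG_VIDEO_MHL_SII9234=y
```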
As we actually have some people on these forums with the micro USB-HDMI cable, I think this should be explored unless it's already been tested or in some other way proven that it's simply not possible.
It's been proven to be impossible already. There's no wire connecting the USB port to the display hardware (I may be wording this incorrectly, but essentially it's physically impossible without soldering on an extra wire or two).
Edit: Wait. Re-reading the USB-to-HDMI thread in Accessories again. Was the previous information false?
Thread moved to Q&A.
Is there anyone who can do one of the following?
- Point me to a HOWTO on checking out and building an Android kernel? I can build Linux kernels no problem; in fact, I have built my own distribution from scratch. But finding and building the SGS source is a nightmare, and I have spent way too many hours on it already.
- Get a kernel developer to build SiI9234.c?
In May of last year, AMD announced their entrance into the ARM microprocessor market, and the hybridization of ARM and x86.
Long before this, I had wondered whether it would be possible for a computer to operate on a hybrid architecture: to assign a certain amount of its work to a powerful CISC-type processor while adding a super-energy-efficient RISC chip to handle co-processing.
This type of heterogeneous computing is already being done with ARM processors using two different core designs. However, the two are similar enough that they are assembly-code compatible. ARM's big.LITTLE architecture is close to what I am talking about, but both cores are ARMv7 processors, so it's not really an ISA hybrid.
I'm talking about an ultra-low-power, hybrid-architecture mobile phone, running an OS like Android, that uses an Intel/AMD x64 processor as its main CPU, with a similarly powerful ARM coprocessor.
A CPU with the performance and compatibility of a desktop PC: more instructions per second at a lower clock, and fewer cores delivering the same computing power as equivalent ARM chips.
Better software compatibility. No more writing custom kernels for every custom chipset. No more "X won't run on Y" because of the chipset.
And at the same time, by adding an ARM coprocessor, an enormous amount of work can be offloaded to it, which means less power consumption, less heat generated, etc.
Say your phone had two chips.
One containing a very powerful x86 system with CPU, GPU, SDRAM interface, etc.: the main "northbridge".
And a second chip, the "southbridge", containing an ARM processor, cellular baseband, DSP, audio, video, etc.
The southbridge could work independently of the northbridge, allowing tasks like making calls or processing multimedia to be done on its own CPU, leaving the application tasks strictly to a badass 64-bit Intel processor that could run full i386 Linux or Windows.
Great. Maybe this is the future of mobile computing devices.
Hey Guys
First of all: I realize that this is a rather long text, so I appreciate the effort of everyone who is going to read it!
Also, I asked a question about two weeks ago which was related to this topic, but it was very specific to Android Wear (which I have given up on since then!).
So, on to the actual post:
I want to build, or rather am already building, an information system for my motorcycle.
As the end result, I imagine a display (about 7 inches) in the dash of my motorcycle. It should show information from my smartphone (for example, notifications about incoming calls) as well as give me the ability to control the music on the smartphone (Android 5.1).
I also want to display further information like speed, average speed, altitude, etc. (I hope you get the idea: basically just an advanced trip computer).
I started developing something but ran into issues. I will explain the concepts I have tried so far and the problems I ran into with each. I hope somebody here has a solution for my problem (which may include recommending hardware and software).
First, about my skills: I am experienced in programming "low-level hardware" like Atmel's AVR series (in plain old C) and developing the associated hardware for it. Making custom PCBs at home isn't a problem for me either, as long as it doesn't come to fancy BGA or SMD packages.
On the programming side I am most experienced in Java (and Android, which is basically Java, of course). I also know C# and the .NET Framework.
But I am willing to learn something new!
The ideas I have had so far differ in how I let the Raspberry Pi (which I want to place in the cockpit) communicate with the smartphone.
In both of the first two concepts, I planned to have a Raspberry Pi with an attached display in the cockpit, running a JavaFX application (I have already started programming it). This application would then communicate with the smartphone as follows:
Idea 1: Java serialization:
I wanted to communicate using command objects. So, for example, I'd have an object for requesting the altitude from the smartphone.
I'd then serialize this command object on the Pi's side and deserialize it on the smartphone. This isn't a problem, because there's Java on both sides (I already got that piece working).
The smartphone would, after receiving and deserializing the object, get the actual altitude from the GPS sensor, pack the result into an answer object, serialize it, and send it back to the Pi.
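To make the idea concrete, here is a minimal sketch of such a command/answer pair over a plain TCP socket. All class names, the address, and the port are illustrative, not from the actual project:

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;

// Illustrative command and answer objects for the Pi <-> phone protocol.
class GetAltitudeCommand implements Serializable {
    private static final long serialVersionUID = 1L;
}

class AltitudeAnswer implements Serializable {
    private static final long serialVersionUID = 1L;
    final double altitudeMeters;
    AltitudeAnswer(double altitudeMeters) { this.altitudeMeters = altitudeMeters; }
}

public class PiClient {
    public static void main(String[] args) throws Exception {
        // Assumes the phone is listening on a reachable address and port.
        try (Socket socket = new Socket("192.168.0.2", 4711)) {
            ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
            ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
            out.writeObject(new GetAltitudeCommand()); // ask for the altitude
            out.flush();
            AltitudeAnswer answer = (AltitudeAnswer) in.readObject();
            System.out.println("Altitude: " + answer.altitudeMeters + " m");
        }
    }
}
```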
The issues I ran into were the following:
-Java Bluetooth library: I wasn't able to find a good, up-to-date Java library for communicating over Bluetooth. I ended up sticking with the RXTX library, which did the job, but I always had the feeling of doing something "not so good". In particular, I didn't want to just write to a COM port (emulated by the Bluetooth module), because I had the feeling that COM ports may change after reboots if the OS feels like it, and I didn't want to build something that needed constant tinkering. Also, writing to COM ports in 2015 just feels wrong, but that may be my personal problem.
Idea 2: HTTP and Web Sockets
The basic idea was to have a web server running on the smartphone, offering a REST-like API which I could access from the Pi.
I also got this concept working, like so:
Using the NanoHTTPD library (from GitHub) I was able to start a web server on the Android device. When someone then issued a POST request to, for example, <IP>:<port>/api/music/next, the web server would receive this request and switch to the next song.
Keeping frequently changing data, for example the altitude, up to date on the Pi would have been done over a WebSocket connection between the Java app on the Pi and the Android web server (which I also got working).
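For illustration, a minimal sketch of that REST endpoint with NanoHTTPD; the route matches my example above, while the class name and the skipToNextSong() hook are placeholders:

```java
import fi.iki.elonen.NanoHTTPD;
import java.io.IOException;

// Minimal NanoHTTPD server exposing one REST-like route.
public class PhoneApiServer extends NanoHTTPD {

    public PhoneApiServer(int port) {
        super(port);
    }

    @Override
    public Response serve(IHTTPSession session) {
        if (session.getMethod() == Method.POST
                && "/api/music/next".equals(session.getUri())) {
            skipToNextSong();
            return newFixedLengthResponse(Response.Status.OK, "text/plain", "ok");
        }
        return newFixedLengthResponse(Response.Status.NOT_FOUND, "text/plain", "unknown route");
    }

    private void skipToNextSong() {
        // Placeholder: on Android this would talk to the media session/player.
    }

    public static void main(String[] args) throws IOException {
        new PhoneApiServer(8080).start(NanoHTTPD.SOCKET_READ_TIMEOUT, false);
    }
}
```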
I figured out that it would be a power-consumption problem to let the smartphone offer a Wi-Fi hotspot (I don't want to have to connect the smartphone to cables on the motorcycle), so I decided to let the Pi start a Wi-Fi access point instead (which isn't a power problem, because the Pi is connected to the on-board power of the motorcycle).
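On the Pi, that access point can be brought up with hostapd; a minimal sketch of the config, where the SSID and passphrase are placeholders of mine:

```
# /etc/hostapd/hostapd.conf -- minimal WPA2 access point on the Pi.
# The SSID and passphrase below are placeholders.
interface=wlan0
driver=nl80211
ssid=moto-dash
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme
rsn_pairwise=CCMP
```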
However, I then realized that the smartphone won't connect to an access point that offers no internet access, only LAN access.
And even if there were a way to force the smartphone to connect anyway, there's no guarantee it would keep working on future devices. Besides, the whole notification feature would become pointless: as long as the smartphone is connected to a "dead-end" Wi-Fi, it won't receive emails or WhatsApp messages.
Idea 3: Using Bluetooth Low Energy:
It seems like the new, modern way to let devices communicate over Bluetooth is Bluetooth Low Energy (BLE). (But I have never worked with it before!)
However, there seems to be little to no support for it on the Raspberry Pi, and it seems impossible to find a Java library that helps with using BLE. (If anyone knows one, please let me know.)
I then thought about replacing the Raspberry Pi with an Android board, because Android has BLE support. But I wasn't able to find a board which is supported by Android 5.1+ and offers BLE. Even the ODROID boards don't seem to support both Android >4.4 and BLE.
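For what it's worth, the phone side of BLE looks manageable with the standard Android API (21+); a minimal advertising sketch, where the class name and service UUID are placeholders I made up:

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.le.AdvertiseCallback;
import android.bluetooth.le.AdvertiseData;
import android.bluetooth.le.AdvertiseSettings;
import android.bluetooth.le.BluetoothLeAdvertiser;
import android.os.ParcelUuid;

// Minimal BLE advertising on the Android 5.0+ side; the board/Pi side
// would scan for this service UUID and connect over GATT.
public class DashAdvertiser {
    private static final ParcelUuid DASH_SERVICE =
            ParcelUuid.fromString("0000feed-0000-1000-8000-00805f9b34fb");

    public void startAdvertising() {
        BluetoothLeAdvertiser advertiser =
                BluetoothAdapter.getDefaultAdapter().getBluetoothLeAdvertiser();

        AdvertiseSettings settings = new AdvertiseSettings.Builder()
                .setAdvertiseMode(AdvertiseSettings.ADVERTISE_MODE_LOW_POWER)
                .setConnectable(true)
                .build();

        AdvertiseData data = new AdvertiseData.Builder()
                .addServiceUuid(DASH_SERVICE)
                .build();

        advertiser.startAdvertising(settings, data, new AdvertiseCallback() {
            @Override
            public void onStartSuccess(AdvertiseSettings settingsInEffect) {
                // The counterpart can now discover the phone and subscribe
                // to GATT characteristics for altitude, notifications, etc.
            }

            @Override
            public void onStartFailure(int errorCode) {
                // e.g. ADVERTISE_FAILED_FEATURE_UNSUPPORTED on some chipsets.
            }
        });
    }
}
```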
Summary:
In general I liked the second and third options much better. They seemed to be the more versatile, modern way; the first one felt a bit like a hack.
However, I ran into the problems presented above, and so far I haven't been able to think of a way around them.
If anyone here:
1) Solved this problem already
2) Knows a really good, NON-HACKY, community-supported Java (BLE) Bluetooth library
3) Knows a language or framework which would be well suited to solve the problem
4) Has another good idea how to solve it
Please let me know!
I just want to build something sophisticated (which I could maybe turn into an open-source project) that isn't hacky.
I mean, the problem has to be solvable; look at the Pebble smartwatch. They also solved it without Android Wear.
I really want to emphasise that this is an open question. I am not fixed on Java, the Raspberry Pi, or anything else.
I thus have three requirements:
1) I don't want to connect the smartphone to a cable, either for data or for power
2) The solution needs to be something power saving, so no hotspot on the android device
3) Non-hacky, sophisticated solution
Best regards
Me =)
PS: As English isn't my native language, I may have put some sentences wrong or failed to express something clearly and unambiguously.
Please feel free to ask; I'd be pleased to clear up any questions!
Any updates?
Hi!
I know this is an old thread, but I'm struggling with a similar issue - except I want to use it for road cycling. Did you have any luck with your project?
All the best
Marius
Hello all!
So I've been using a Mac for the last few years and really enjoying it, but now I want to use Linux (more specifically Fedora) as my main OS rather than macOS.
The machine for doing so is a late-2013 15" MacBook Pro with NVIDIA graphics.
Getting everything set up took a while, as a lot of the drivers are proprietary, but I managed to get most things working (except for sleep, which is broken for some unknown reason so the Mac takes more than 5 minutes to wake, and the FaceTime camera, which doesn't work).
The problem, however, is that the machine is using the NVIDIA GPU, which causes it to get very toasty and drain the battery faster.
I would like to enable the iGPU, but keep the NVIDIA GPU available as sometimes I connect an external monitor using the HDMI port.
I am a newbie to Linux and have little knowledge, but I want to try everything possible to make Fedora work as well as possible on my computer. The procedures I tried were:
-Installed rEFInd and tried to use the apple_set_os.efi approach, but I probably did it wrong and it did not work
-Tried to modify NVRAM variables to make the Mac use the iGPU by default, but when booting into Fedora only the NVIDIA GPU still shows up
Any help is welcome, but please don't tell me to stay on macOS or that it is impossible, as I have read about people who managed to succeed.
Thanks in advance!