
Gavita Pro DE vs. ePapillion: irradiance uniformity results

Go read High Times magazine in July for the DE test.
I stopped reading High Times when I was a teenager. I prefer science, not poorly written articles from "experts." Please don't suggest that whatever is written in there can be compared to the testing this thread is based upon. Most of the articles in that magazine are full of incorrect and wholly unsubstantiated claims, based mostly on hearsay and conjecture, with a complete lack of references and citations.

The research this thread is based upon shows what single-fixture uniformity really is for the Gavita vs. the ePapillion. Sorry if you don't like the results, but that's life, buddy.

If you want to post the High Times article feel free, and I'll read it with an open mind.

For what it's worth, the uniformity tests by GrowerHouse basically agree with the uniformity tests I posted about in this thread, for Gavita and ePapillion. That is, a single Gavita fixture has considerably less uniform irradiance than the ePapillion, and the Gavita has a greater 'hot spot' than the ePapillion.

As for light spacing: here is how it is done. Have you ever seen a light-interaction calculation with a grid larger than just the area between the center lights? That may be good for greenhouses, but not for climate rooms. We calculate room uniformity, not just the center.

https://www.dropbox.com/s/ahg4wvcoqgdqtt5/sample%20light%20calculation%20for%20Howard%20Steven%20Soto.pdf?dl=0
I'll check out that PDF, but yes, I have paid for the exact type of calculations you're referring to, from LTI Optics (http://www.ltioptics.com/en/index.html) and Cycloptics (with Photopia and their in-house software).

I've spent over $2K in the past on such calculations for Cycloptics luminaires: for fixture count, placement, and distance to canopy to achieve specific irradiance values (umol/s per unit area in the PAR range) and uniformity (>90% min/max) over the whole room, defined at both bottom and top irradiance planes, not just a single plane of irradiance values.
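As a concrete illustration of the min/max uniformity criterion mentioned above, here is a minimal sketch; the grid values are invented for illustration, not measured data:

```python
# Sketch: compute min/max uniformity from a grid of PPFD readings
# (umol/m^2/s). The values below are invented for illustration only.
ppfd_grid = [
    [820, 860, 880, 860, 820],
    [850, 900, 930, 900, 850],
    [860, 920, 950, 920, 860],
    [850, 900, 930, 900, 850],
    [820, 860, 880, 860, 820],
]

readings = [v for row in ppfd_grid for v in row]
uniformity = min(readings) / max(readings)          # min/max criterion
avg = sum(readings) / len(readings)
hot_spot = (max(readings) - avg) / avg              # peak above average

print(f"min/max uniformity: {uniformity:.1%}")      # 820/950 = 86.3%
print(f"hot spot above average: {hot_spot:+.1%}")   # +8.7%
```

This invented grid would fail a >90% min/max requirement, which is the kind of whole-room criterion being described.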

However, this thread isn't about radiation uniformity from a series of fixtures; it's about uniformity from single fixtures. And I wrote at the start of this thread that when multiple fixtures are used, uniformity increases.

And no matter how you try to slice the data from the 3rd party testing this thread is based upon (which involved two Ph.D. plant physiologists, one of whom is famous and specializes in indoor growth applications), the ePapillion has better uniformity as a single fixture. As an array of fixtures, both Gavita and ePapillion can provide good uniformity, but it's still easier to achieve greater uniformity with the ePapillion (due to the uniformity of each fixture in the array).

Like I've written to you a few times, there are many ways to grow Cannabis commercially. And paths are needed for various growing methods and styles (e.g., RDWC, trees, etc.) where rolling benches aren't feasible. I know you like to think there's only one way to grow commercially, but that's not the case.

Rolling benches are great when they're well suited to the growing method used, but they're not well suited for all growing methods.

I've seen you post that picture before, and it doesn't impress me any more now than it did then.

You ask, "why paths?" A few reasons are:
- Work efficiency
- Air movement
- Radiation reflection off walls to the lower canopy
- Hard plumbed growing systems (like RDWC)
- Trees (e.g. 6' tall plants with 10+ gallon substrate containers)
- Etc.
 
Beta Test Team said:
About Gavita vs. ePapillion, I totally neglected to mention that the ePapillion actually emits more umol/s within PAR than Gavita, by about 1%.
I appreciate your work, BTT. And I can get all uptight when people call me out. But with love, I call horseshit.
No one is calling me out, but both you and Whazzup seem to be confused. Whazzup also has an agenda and is quite biased: he just won't admit when he and Gavita are wrong, as seen in this thread and the Gavita thread, where I proved his claims wrong many times on various topics.

There is no way your confidence in this number, or this testing, can allow you to say with a high degree of confidence that there is any appreciable difference between these reflectors, if you are being honest.
It depends upon which difference you're referring to. If you mean differences in radiation uniformity from a single fixture, you're wrong: I can say with a high degree of confidence that there is a big difference between them, without the need to test hundreds of fixtures. However, if you mean the point of mine you quoted above, that the ePapillion emits about 1% greater PAR range radiation (umol/s) than the Gavita, my confidence is not as great as it is about uniformity. I'd still bet that the ePapillion has greater PAR range output than the Gavita, based on the testing this thread is based upon, but you're correct that for greater certainty more fixtures and lamps should be tested.

The fixtures were tested inside an integrating sphere in terms of PAR range radiance (umol/s), by an accredited 3rd party lab using the most recent industry protocols and certified equipment. The lamps in both cases are the same (Philips DE HPS), though different physical lamps.

The differences in PAR range output can stem from differences in ballasts, reflector design, and lamps (even though both lamps are the same make and model).

Statistically, you would need to test several hundred randomly selected reflectors to even get close to "1% accuracy" with any meaningful confidence. And you would need quite a few other people repeating this same test to do it with any degree of confidence that a third party should respect. There are all kinds of biases in a one-man operation that make it hard to have confidence in the results.
None of this testing was done by me or my colleague. There were no biases in the testing, it's simple science: the fixtures were placed inside an integrating sphere that measured the PAR range radiance (umol/s) emitted from the fixtures. Here's what I wrote:
About Gavita vs. ePapillion, I totally neglected to mention that the ePapillion actually emits more umol/s within PAR than Gavita, by about 1%.

ePapillion = 1,767 umol/s in PAR range emitted from the fixture.
Gavita = 1,751 umol/s in PAR range emitted from the fixture.

So ePapillion emits a little more useful radiation than Gavita. However, when accounting for input watts (joule/s), both fixtures have a photosynthetic efficiency of about 1.7 umol/s per joule within PAR range.
And I never wrote anything about "1% accuracy." The claim about 1% is about PAR range radiance, i.e. umol/s in PAR range exiting both reflectors (tested inside an integrating sphere).
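For anyone checking the arithmetic behind the "about 1%" and "~1.7 umol/J" figures, here is a quick sketch; the input wattage is my assumption for illustration, not a number from the test report:

```python
# Quick check of the figures above. PAR flux values are from the
# integrating-sphere test; the input power is an ASSUMED ~1,040 W
# (double-ended 1000 W HPS plus ballast losses), not a measured value.
epap_umol_s   = 1767.0   # umol/s in PAR, ePapillion
gavita_umol_s = 1751.0   # umol/s in PAR, Gavita
input_watts   = 1040.0   # assumed wall draw (J/s)

rel_diff = (epap_umol_s - gavita_umol_s) / gavita_umol_s
print(f"ePapillion advantage: {rel_diff:.2%}")        # ~0.91%, i.e. "about 1%"
print(f"ePapillion efficacy:  {epap_umol_s / input_watts:.2f} umol/J")
print(f"Gavita efficacy:      {gavita_umol_s / input_watts:.2f} umol/J")
```

Both efficacies come out around 1.7 umol/J with that assumed wattage, matching the rounded figure quoted above.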

I mean, just looking at your significant figures, there is a problem. And how many quantum detectors do you have to verify which one is lying?
What is the problem with the figures I quoted above?

The fixtures were tested in an integrating sphere in terms of the PAR range radiance differences. This is where you're getting confused. And I didn't say either company was lying about the PAR range radiance (that I recall), but I did say Gavita isn't telling the whole truth about the uniformity from a single fixture.

To see the protocol for testing uniformity from a single fixture see this link: https://www.cycloptics.com/sites/default/files/USU_functional_efficiency.pdf

Now DO NOT THINK I am shitting on what you are doing. Just the opposite. Who the fuck else is doing this BASIC science we need except you! And who is giving it AWAY!? The work you do is real and important, IMHO.
I don't think you're doing that, I think you have good intentions (I can't say the same for Whazzup), but I do think you're confused and drawing incorrect conclusions based upon that confusion.

Your post doesn't bother me at all, criticism is always welcome when it's correct criticism. I'm happy to be corrected when the corrections are correct.
 
BTT, I was strictly referring only to your statement that this reflector system emits 1% greater PAR than the other one.

I was not discussing or addressing the uniformity discussion, although I find it interesting. It's not like Gavita cannot measure this independently, and it sure would be neat if they actually took that information and built a better reflector, because pretty much every reflector I have seen sucks. So give no ground, BTT; just don't get carried away, as most humans have a tendency to.

So it's not that I am confused. It might be that this medium of communication is unclear, or perhaps my use of it. So let me be clear.

It depends upon which difference you're referring to. If you mean differences in radiation uniformity from a single fixture, you're wrong.
I was not referring to uniformity.

if you mean the point of mine you quoted above, that the ePapillion emits about 1% greater PAR range radiation (umol/s) than the Gavita, my confidence is not as great as it is about uniformity
The PAR emittance was what i was referring to.

Now, I think it is fair for you to generally prefer one reflector over the other due to observations in your testing. And it is not "pick on BTT day," but I will now pick on BTT:

None of this testing was done by me or my colleague. There were no biases in the testing, it's simple science: the fixtures were placed inside an integrating sphere that measured the PAR range radiance (umol/s) emitted from the fixtures.
no biases? None at all? HORSESHIT!
I am not talking about political biases or other emotional agenda crap. The history of science is a history of discovering biases in scientific literature, experiments, and studies.
for example?
The fixtures were tested inside an integrating sphere in terms of PAR range radiance (umol/s), by an accredited 3rd party lab using the most recent industry protocols and certified equipment. The lamps in both cases are the same (Philips DE HPS), though different physical lamps.
Which reflector was tested first? Was this random? Were both bulbs new or burned in? Was the selection of the bulbs for the fixtures randomized? Were the lamps swapped and the test rerun? How many times was the test rerun to build a statistically significant result and a proper understanding of the deviation data?

I am very sure you know very well how ANY study or experiment can be pulled apart. That very often can be important, and it can sometimes be used to attack for political and human reasons. Please understand the point of these questions in the proper context.

And finally, about significant figures. As I understand it, one experiment was done here; thus, all your data is good to one significant figure. You are assuming four digits of accuracy in your PAR data. It seems (at first glance, by some guy on the internet) that that level of accuracy (1,767 vs. 1,751 with 100% accuracy and 100% confidence, which is utter bullshit) is unsupported by the methodology. It would even be inaccurate to say 1.7x10^3 for either of them; you should say 2x10^3, which literally means the PAR value is somewhere between 1,000 and 3,000, all equally likely.

A P.S. to whazzup
Software modeling is fun and interesting. But ALL OF IT needs to be understood as LIES. One needs constant Physical testing ($$$) to understand the lies that software is telling us. Confidence in software is stupid. Always bias to physical reality.

BTT is telling you that software-modeled clients are not getting what the software says. That is 100% expected, IMHO. Non-shitty software would give the user a range of expected values. Good software would put a confidence on that range. Great software would have a feedback loop to try to understand why there are variations between the two and move the software toward better modeling.
 
LargePrime said:
BTT, I was strictly referring only to your statement that this reflector system emits 1% greater PAR than the other one.

I was not discussing or addressing the uniformity discussion, although I find it interesting. It's not like Gavita cannot measure this independently, and it sure would be neat if they actually took that information and built a better reflector, because pretty much every reflector I have seen sucks. So give no ground, BTT; just don't get carried away, as most humans have a tendency to.

So it's not that I am confused. It might be that this medium of communication is unclear, or perhaps my use of it. So let me be clear.
You are (or at least were) confused, please re-read my post to you. Also, you're the one using capitals (yelling) and cursing, not me, so trust me when I say it seems you're the one getting carried away.

The reason I wrote that you're confused is that you thought (and seem to still think) I did the testing this thread is based upon (neither I nor my colleague did that testing), and you thought the PAR range radiance (umol/s) values were measured on a flat plane by me (they were not; they were measured in an integrating sphere).

LargePrime said:
Beta Test Team said:
if you mean the point of mine you quoted above, that the ePapillion emits about 1% greater PAR range radiation (umol/s) than the Gavita, my confidence is not as great as it is about uniformity
The PAR emittance was what i was referring to.

Now I think it is fair for you to generally prefer one reflector over the other due to observations in your testing. And it is not "pick on BTT day", but, i will now pick on BTT,
No disrespect intended LargePrime, but you're not picking on me, it seems instead you're kind of confused.

And again, the testing this thread is based upon was not done by me. All testing was done by two different independent 3rd parties. Again, please read the study this thread was based upon (from PLOS ONE).

LargePrime said:
Beta Test Team said:
None of this testing was done by me or my colleague. There were no biases in the testing, it's simple science: the fixtures were placed inside an integrating sphere that measured the PAR range radiance (umol/s) emitted from the fixtures.
no biases? None at all? HORSESHIT!
I am not talking about political biases or other emotional agenda crap. The history of science is a history of discovering biases in scientific literature, experiments, and studies.
The reason I wrote there are no biases is because there aren't any. It's simple numbers: two tests, two sets of numbers. And again, all testing (in terms of PAR range radiance) was done by an accredited 3rd party lab using the most recent industry protocols and certified equipment. (Read the PLOS ONE paper.)

LargePrime said:
for example?
Beta Test Team said:
The fixtures were tested inside an integrating sphere in terms of PAR range radiance (umol/s), by an accredited 3rd party lab using the most recent industry protocols and certified equipment. The lamps in both cases are the same (Philips DE HPS), though different physical lamps.
Which reflector was tested first? Was this random? Were both bulbs new or burned in? Was the selection of the bulbs for the fixtures randomized? Were the lamps swapped and the test rerun? How many times was the test rerun to build a statistically significant result and a proper understanding of the deviation data?
All of those questions about testing method are beside the point: testing order doesn't matter, randomization of testing order doesn't matter, the lamps used were those that came with the luminaires, and yes, the lamps were properly prepared (earlier in this thread I posted where you can find the full testing protocol).

Standard deviation isn't relevant because the focus of the research wasn't statistical analysis, and I assume the cost of running such research was prohibitive. However, I agree it would be nice to have a more thorough analysis (larger sample sizes specifically), which is why I agreed the confidence level in the results isn't very high. Also, note that statistical analysis isn't a requirement for a proper study.

And as a famous saying goes, and holds water for this issue: "If your experiment needs statistics, you ought to have done a better experiment" - Ernest Rutherford
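That said, LargePrime's "several hundred fixtures" intuition can be made concrete with a standard two-sample power calculation; note the fixture-to-fixture spread below is an assumed figure, not from any report:

```python
# Rough two-sample power calculation: how many fixtures per group would
# be needed to detect a 1% difference in mean PAR output? The 3%
# unit-to-unit standard deviation is an ASSUMED figure for illustration.
from math import ceil

z_alpha = 1.96   # two-sided 95% significance
z_beta  = 0.84   # 80% power
sigma   = 3.0    # assumed fixture-to-fixture std dev, in % of mean
delta   = 1.0    # difference to detect, in % of mean

n_per_group = ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)
print(f"fixtures needed per group: {n_per_group}")  # 142 with these inputs
```

With a larger assumed spread, the required sample size grows quadratically, which is roughly where "several hundred" comes from.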

LargePrime said:
I am very sure you know very well how ANY study or experiment can be pulled apart. That very often can be important, and it can sometimes be used to attack for political and human reasons. Please understand the point of these questions in the proper context.
Please understand many of your points have no validity. See above.

LargePrime said:
And finally, about significant figures. As I understand it, one experiment was done here; thus, all your data is good to one significant figure. You are assuming four digits of accuracy in your PAR data. It seems (at first glance, by some guy on the internet) that that level of accuracy (1,767 vs. 1,751 with 100% accuracy and 100% confidence, which is utter bullshit) is unsupported by the methodology. It would even be inaccurate to say 1.7x10^3 for either of them; you should say 2x10^3, which literally means the PAR value is somewhere between 1,000 and 3,000, all equally likely.
The accuracy of the PAR range radiance measurements relates to the accuracy of the sensor, not to repetition or sample size. One can assume a less-than-5% error margin from the sensor. What you claimed about accuracy is just wrong.

The point you should be making is that of sample size and mean data (not accuracy of measurement), which is why I agree with you that the confidence in the test results isn't very high.
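To make that limited-confidence point concrete: with the <5% sensor error margin mentioned above, the absolute error bars on the two single readings overlap, which is exactly why a ~1% difference between single measurements carries low confidence. A rough sketch:

```python
# Rough sketch: do +/-5% absolute error bars on the two sphere readings
# overlap? If they do, a ~1% difference between single measurements
# cannot be resolved in absolute terms. (Systematic sphere error may
# largely cancel in a relative comparison, so this is a worst case.)
def interval(value, rel_err=0.05):
    return (value * (1 - rel_err), value * (1 + rel_err))

epap = interval(1767.0)    # (1678.65, 1855.35)
gavita = interval(1751.0)  # (1663.45, 1838.55)

overlap = epap[0] < gavita[1] and gavita[0] < epap[1]
print(f"ePapillion: {epap[0]:.0f}-{epap[1]:.0f} umol/s")
print(f"Gavita:     {gavita[0]:.0f}-{gavita[1]:.0f} umol/s")
print(f"intervals overlap: {overlap}")  # True -> 1% gap within error bars
```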

And like I wrote many times to you and others in this thread, I did not do this testing.

LargePrime said:
A P.S. to whazzup
Software modeling is fun and interesting. But ALL OF IT needs to be understood as LIES. One needs constant Physical testing ($$$) to understand the lies that software is telling us. Confidence in software is stupid. Always bias to physical reality.
Sorry, LP, but I don't agree with that at all. I think maybe you don't understand how this modeling is carried out (the physics and math).

The modeling whazzup refers to is very valid, and it's not lies. That said, real-world measurement should always take place to verify modeling, because there can be errors and inaccuracies in it. But calling all modeling lies is a bridge too far.

I personally don't like whazzup at all, mostly because he refuses to accept when he's wrong, and refuses to accept that Gavita makes incorrect claims simply because Gavita is the one making them; I also think he needs to better understand the topics he professes to understand. Still, much of what he writes is correct, Gavita makes a good product, and their modeling is very valuable.

LargePrime said:
BTT is telling you that sofware modeled clients are not getting what the software says. That is 100% expected IMHO. Not Shitty software would give the user a range of expected values. Good software would put a confidence on that range. Great software would have a feed back loop to try and understand why there are variations between the two and move the software toward better modeling.
Please don't assume to know what I'm claiming, because I don't agree with much of what you've written in this thread. But I do agree the modeling software should include statistical analysis, such as what Photopia, from LTI Optics, provides.
 

timmur

Well-known member
Veteran
The way you can tell this test is bullshit is twofold: 1) it is an ad for Phantom gear, and 2) the test shows the ePap putting more photons on the canopy than the Gavita, which simply isn't true.

Unless I'm misunderstanding the attached, it seems to bear out that the ePap puts out more photons than the Gavita.
 

Attachments

  • pub__8264567.pdf (1,000.4 KB)

HUGE

Active member
Veteran
Yes, I have read that. And then I went and bought 10 Gavitas, 10 ePaps, and 18 LECs, then did some testing with a LI-COR meter. I'm no scientist, but I can measure, and I used a good meter. The Gavita crushed the ePap on a 5'x21' test-bed section lined with Orca; points were measured on a 1' grid, 3' from the bulb.
 

timmur

Well-known member
Veteran
Thanks for the feedback Huge.

Integrating sphere measurements indicate that the ePap has slightly higher output. From Wikipedia,
The total power (flux) of a light source can be measured without inaccuracy caused by the directional characteristics of the source, or the measurement device.
Maybe it has something to do with your test procedure? Very interesting results.
 

HUGE

Active member
Veteran
Yes, it may very well have higher output in the sphere. It comes down to reflector design. Essentially, the ePap throws a good portion of its light out at a straight 90 degrees, which basically only lights the walls, while the Gavita puts all its light on the canopy, where it gets measured. That was the conclusion I came to from my testing.
 

Avenger

Well-known member
Veteran
Let us see your data points then; otherwise you are just blowin' smoke.
 

Bob-Zilla

Member
The way you can tell this test is bullshit is twofold: 1) it is an ad for Phantom gear, and 2) the test shows the ePap putting more photons on the canopy than the Gavita, which simply isn't true.

Just read up on this, and in my opinion it is a professionally brilliant (pun intended) marketing advertisement.

The flaw here lies in that no single, constant lamp/bulb was used across the different brands' ballasts/reflectors. The top finishers were all tested with the 400V, 2,100 umol PLUS bulb.
The rest, including Gavita, were not tested with that same class of bulb.
 

Avenger

Well-known member
Veteran
I contacted Hydrofarm; the bulb they tested the Gavita fixture with was in fact the "PLUS" lamp, 2,100 umol. It's just a textual omission on the analysis results sheet.

They used the lamp that came with each luminaire kit.
 

KingP

New member
In the ITL report I see a lot of data, but this doesn't say a thing.

A few things in this test were definitely done wrong.

First of all, the photogoniometer only measures lumens, or to be more exact, candela. Lumens and micromols are not the same thing. A higher lamp temperature increases the lumen output, but not so much the micromol output.
These measurements are really comparing apples to pears.

Next to that, relative measurements should all have been done with the exact same light source/lamp. Such a test basically shows how much light from a known light source is reflected by the reflector; this is called reflector efficiency.
Using different lamps makes the measurement useless.
Lamps that are brand new, or that have been used for 5,000 hours, will give less light than lamps that have been used for only 1,000 hours.
Even if all lamps in the test were brand new, this doesn't say a thing about the output over time.
E.g., the Agrosun might give a lot of light in the first 500 hours and drop to 80% after that.
The Philips lamps are known to give a relatively low output in the first few hundred hours, but they have the highest output after roughly 1,000 hours and then stay constant.

As a basis for this measurement, all lamps were set to 165,000 lumen (yes..., lumen...), but the measurement shows that each fixture used a different lamp. Based on that, Parsource claims a reflector efficiency of 93.5% (in lumens).
There is no lamp with an exact output of 165,000 lumen. Do you have a test report showing that the lamp you used actually had an output of 165,000 lumen, and that that lamp was used in this particular Parsource fixture in this measurement? So how can you state you have 93.5% efficiency?
If the lamp had a naked output (without reflector) of 180,000 lumen, the efficiency in this measurement would have been 85.6%.
If the lamp in, e.g., the Gavita had a naked output of 140,000 lumen, the efficiency would have been 99.0%.
This is the reason why calibrated lamps are used for measuring reflector efficiency. And the same lamp for each measurement.

The Ushio or Philips (even the Gavita and ePapillon bulbs) all have an average output around 2100 micromol/s.
In the second graph, of the total output in umol/s, there's something weird.
Graph 1 shows that the Parsource has an efficiency of 93.5%. The output in graph 2 is 1842.5 micromol/s. This means the lamp gave 1842.5/0.935 = 1970 umol/s.
Some others: Gavita, 84%; output is 1614 micromol/s, so the lamp gave 1614/0.84 = 1921 umol/s.
ePap: 88.4%; output is 1669, so the lamp gave 1669/0.884 = 1888 umol/s.
Why doesn't this add up to 2100 micromol?

Now the question is how this second graph was made up. How do the results in this graph add up?
The reflector efficiency measurements were all done by setting the lamp output to 165,000 lumen, but to calculate the usable light, different lamp settings are used.
If the Parsource has a reflector efficiency of 93.5% and the Agrosun is promised to be 2100 micromol/s, the total output of the fixture should be 1963 micromol/s, but the measurements show only 1842 micromol/s.
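The back-calculation in the paragraphs above can be checked mechanically; the efficiencies and measured outputs are the figures quoted from the report:

```python
# Check of the back-calculated lamp outputs discussed above:
# implied lamp umol/s = measured fixture output / reflector efficiency.
fixtures = {
    #            (efficiency, measured umol/s from fixture)
    "Parsource": (0.935, 1842.5),
    "Gavita":    (0.84,  1614.0),
    "ePap":      (0.884, 1669.0),
}
NOMINAL = 2100.0  # umol/s the lamps are rated for

for name, (eff, measured) in fixtures.items():
    implied = measured / eff
    shortfall = 1 - implied / NOMINAL
    print(f"{name}: implied lamp output {implied:.0f} umol/s "
          f"({shortfall:.1%} below the 2100 rating)")
```

Every implied lamp output falls 6-10% short of the 2,100 umol/s rating, which is exactly the inconsistency being pointed out.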

Is Hydrofarm showing with these test results that their reflector is just not as good as they claim? Is the Agrosun lamp just really bad? Or did they just show they have no clue?

Hydrofarm should have done their homework before posting useless measurement results.
 
It looks like Gavita is coming out with new reflectors in February that look more like the ePapillons, according to Growers House. They are also coming out with new small-room reflectors.
 

tduby

New member
Yeah, I think Gavita realized that for a lot of growers a lower, wider beam has advantages in some situations. It will be interesting to see how these reflectors perform.
 

rykus

Member
Really thinking of going with a few Epap as a test and maybe switching over to DE fixtures over the year...

Anyone have any good links or grow reports documenting the ePap, or experience with them? I would love some input!

Cheers
 

whazzup

Member
Veteran
I haven't gone over the full thread yet but I would like to clear up a few inaccuracies.

1. The Gavita reflector was designed to create uniform light. If you have ever seen a lighting plan made by Gavita then you will see that our uniformity is >90% in the (complete) room, and more important, no peaks of more than 8-10% above average light levels, which means no hotspots.

2. It is a pity that some of our competitors make light plans where they only take a small center grid to show their uniformity. That might be a method you use in greenhouse calculation (actually it is, it is impossible for a program to calculate the uniformity of a 120,000 square meter surface), but that is not the right method for calculating uniformity in a growroom.

3. In the comparisons between LED and HPS, the argument that you spill a lot of light in the paths is not valid. You should not have any paths in the first place; use rolling tables. We light rooms, not beds. No modern greenhouse has paths anymore. The advantage of the wider, overlapping throw of the HPS lamps is a much better horizontal penetration of the crop. LEDs mostly have a much smaller beam angle, which actually requires you to take more distance from the fixture, but it does prevent light losses to walls in small rooms. In a larger room that advantage is no longer there, and HPS still gives you better uniformity and horizontal penetration.

4. Gavita installs hundreds of thousands of fixtures in professional greenhouses every year. We have to guarantee light levels there. If we were under-performing by 20%, as some trials or tests suggest, we would have been in big trouble for over 30 years. And it might not surprise you that the company that commissions the trials always comes out best.

If you go to the Gavita YouTube channel, you can see a video of how we measure lamp and fixture output. There will be an update of that video soon, as we now use a special protocol, developed together with Philips, to measure lamp and fixture output. We are waiting for Philips to publish it first; then we will explain it in an upcoming video from our studio, with the engineers present.

This is not only an issue in this industry, but also in the horticultural industry. Lamp labs work according to standards based on lumens and cannot make certified reports on anything other than that. This has huge consequences when measuring HPS lamps in reflectors: an HPS lamp in a reflector heats up, which causes a slight shift in spectrum, which in turn causes a drop in the photopic (eye-sensitivity-weighted) output. That could lead to a drop in lumen output of up to 10%. However, the micromol output does not change much at all, a few tenths of a percent maybe.

As light labs do tests that are 99.9% aimed at lumen output (they are meant to show things according to the sensitivity of the human eye), they are not equipped to perform the horticultural tests accurately. Let alone if they do not measure base lamp output but get that specified by the testing company...
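The lumens-vs-micromol point can be illustrated with a toy two-line spectrum; the wavelengths, powers, and V(lambda) weights below are my own illustrative choices, not Gavita or Philips data:

```python
# Toy illustration: shift 5 W of a 100 W HPS-like spectrum from 590 nm
# toward 650 nm (as a hot lamp's spectrum reddens). Lumens use the
# photopic weighting V(lambda); PAR photon flux weights every photon
# equally. Wavelengths and powers here are invented for illustration.
H, C, AVOGADRO = 6.626e-34, 2.998e8, 6.022e23

def lumens(spd):                       # spd: {wavelength_nm: watts}
    V = {590: 0.757, 650: 0.107}       # photopic luminosity values
    return 683 * sum(w * V[nm] for nm, w in spd.items())

def umol_per_s(spd):                   # photon flux in umol/s
    per_photon = lambda nm: H * C / (nm * 1e-9)   # joules per photon
    return sum(w / per_photon(nm) for nm, w in spd.items()) / AVOGADRO * 1e6

cold = {590: 100.0, 650: 0.0}
hot  = {590: 95.0,  650: 5.0}

print(f"lumens change: {lumens(hot)/lumens(cold) - 1:+.1%}")          # ~-4.3%
print(f"umol/s change: {umol_per_s(hot)/umol_per_s(cold) - 1:+.1%}")  # ~+0.5%
```

The lumen figure drops several percent while the photon count barely moves, which is the behavior described above for a hot HPS lamp in a reflector.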

Anyways, as always: educate yourself ;)

peace
 
