Hopefully the first phase of the competition served to familiarize you with CRTS. The second phase of the competition is meant to put more emphasis on MAC layer functionality. Specifically, the goal is to achieve high throughput while minimizing the impact of your transmissions on a primary user (PU) network. As such, the scoring will be modified to include a multiplicative factor based on the degradation experienced by the PU network.
Score = (Sum of ShaRC node throughputs) × (PU throughput with active ShaRC nodes) / (Average PU throughput with no active ShaRC nodes)
We will apply the same statistical analysis as in the first round to avoid ranking teams based on statistically insignificant results.
As you begin working on your phase two submissions, there are a few features we've added to CRTS that you may find useful:
•Direct control over the number of frames transmitted through the addition of the ECR->start_tx_for_frames(int) function.
•Simultaneous spectrum sensing and frame reception is now possible. To enable/disable this feature use the ECR->set_ce_sensing(bool) function. When true, the ECR will pass all received samples to the CE as a USRP_RX_SAMPS event. See the CE_Simultaneous_RX_and_Sensing cognitive engine for an example of how this works.
•You can now call ECR->get_tx_data_rate() to get an estimate of the physical-layer throughput you should achieve with your current transmit configuration. In our experience it is fairly accurate. Note that it does not account for packets that are missed or received in error, which will of course reduce your throughput. Also, this number will differ from your measured network throughput due to overhead from the IP header; see CE_Throughput_Test.cpp.
In addition, we've made some minor changes, listed here, which we hope will save you some frustration:
•The '.cfg' extension has been dropped when naming scenarios in master_scenario_file.cfg.
•The argument passed to the CE execute function was changed from a void* (which was later typecast) to an ExtensibleCognitiveRadio*.
•We've added some scenario configuration parameters related to the network traffic generation for each node. Take a look in the newly added Scenario_Template.cfg file for details.
Phase 2 begins February 8, and submissions are due March 11, 2016.
Hello Spectrum-ShaRC challengers!
Here are some additional details for the first round of competition.
This competition will be using the Cognitive Radio Test System, a framework for test and evaluation of cognitive radio (CR) networks. The focus of this round of the competition is on cognitive engines (CE). Each team will design one or two CEs based on the Extensible Cognitive Radio (ECR).
Teams are not responsible for developing an entire CR from the ground up, e.g., using software-defined radio (SDR) software such as GNU Radio.
The ECR is a particular CR that we have developed to enable flexible development of CEs. Please see the documentation for CRTS and the ECR available in the git repo https://github.com/ericps1/crts/releases/tag/v1.0.
Score will be based on the total data transmitted between the two CRs in a set period of time across a set of interference scenarios. These scenarios will vary in perceived difficulty/complexity. In general, a scenario will comprise the two CR nodes and one or more nearby interferers transmitting dynamically in time and frequency. Teams will have no control over, or exact knowledge of, the particular scenarios used for scoring. Final scoring will be based on statistical results, i.e., scenarios will be run multiple times using multiple combinations of nodes. Each team will be evaluated in an identical way.
Submissions will be in the form of one or two .cpp files defining CEs for the ECR. No other alterations to CRTS or the ECR will be made for a particular team.
All testing will be performed on the CORNET testbed.
Because of the number of teams entered in the challenge, access to CORNET will follow a strict schedule which will be provided to teams individually.
CORNET accounts and schedule information will be provided upon receipt of each team’s signed publication permission form (PDF attached).
Submissions are due November 15. Late submissions will not be accepted. Code that does not compile will be given a score of zero. Code that crashes during testing will be scored based on the data successfully received before the crash.
Early submissions will be given preliminary evaluations in the order they are received to make sure the code compiles and runs. Time permitting, feedback will be provided to teams so they can fix any issues in their code.
Cheating will result in disqualification. Cheating includes but is not necessarily limited to: sending data between nodes other than over-the-air via the USRP, using unauthorized frequencies for transmission, or using the network synchronized clocks on the nodes to synchronize control of the CRs.
Invitations to the fourth and final round of the challenge, to be held in Blacksburg, VA during June 1 - 3, 2016, will be determined by rankings in the first three rounds: a team's rankings in the three rounds are added together. For example, a team that places first in round 1, second in round 2, and third in round 3 would have a cumulative score of 6 and would be invited after a team that earned a cumulative score of 5 by placing second, first, and second. We currently plan to invite up to two representatives from each of the top four qualifying teams, and can provide limited travel funds for each of these representatives. The number of invited teams could be increased depending on the funding available at the time invitations are issued.
Participation in the challenge has been limited to the 17 teams that registered before or shortly after the October 15 deadline. For the first round only, late entry into the challenge adds one to the ranking used in determining invitations. For example, a team that entered the challenge late and places second in the first round would have a rank of "3" used in computing its score.
Carl Dietrich, on behalf of the Spectrum-ShaRC organizing team