Tuesday, August 25, 2020

Contemplating Oscar Wilde essays

Oscar Wilde is a fascinating writer and a captivating individual, with his wild tendencies and extravagant literary work. One such piece is An Ideal Husband. The characters of this play approach upper-class life in a way that is both comedic and true. Sir Robert Chiltern is the image of the ideal husband: honest to say the least, kind, courteous, rich, holding an important post in the House of Commons, and much more. Lord Goring is a man every young woman wants to marry, yet he does not appear to be interested in them. By the end of the play, however, he proposes to Mabel Chiltern and she accepts, as he will be an ideal husband. There is also the chaos of people threatening blackmail against one another. These two plots are expertly intertwined to make a comic play that also carries interesting messages within its light-hearted tone, for example the conversations about the education of women. It also offers other positive views of women: Lady Chiltern, for instance, has a positive and giving relationship with her husband. She has built a working relationship much like one we would consider a marriage today, where she does essentially what she wants. There are still the ideals of the time to consider when thinking about her freedom, so it is not perfect, but it is better than most of what you find from that period. I have also seen the film made in 1999. Looking at the costumes alone, it was magnificent. The costume director outdid themselves and created some wonderful eye candy for the viewer. It was good to see the period clothing in a live-action piece: to see the way the fabrics moved and the different brilliant materials that can be used. I do not know how true the costume director was to the period, but it felt very authentic. It is important to see clothing from different sources so you can get a solid feel for it. When reading An Ideal Husband I got a sense of the time period and how people acted during it. There were seve...

Saturday, August 22, 2020

Nutrition - Food Essay

Thesis Statement: The majority of students have unhealthy food preferences.

Topic Outline
I. Influence of Media
   A. Calorie-dense foods are extremely rewarding to consume.
   B. Causes greater snack food consumption.
II. Income of an individual
   A. Eating healthy is expensive.
   B. Low-income individuals buy and eat cheaper foods.
III. Convenience
   A. Fast and easy access
      1. Delivery services
      2. Ready-to-eat foods
         a. frozen foods
            1) TV dinners
            2) shelf-stable products
            3) prepared mixes
   B. Saves time

Food, in order to fulfill its purpose, must first be consumed. Under normal conditions, food is consumed only if it is acceptable enough for the consumer. It therefore becomes a great responsibility of the one who prepares and serves food to make it acceptable, besides being nutritious and safe. Food habits do not develop in a vacuum. Like other forms of human behavior, they are the result of many individual, social, cultural, and psychological influences (Williams, 1974). Nowadays, young people choose food they like without thinking about what it contains. Because of busy lives, they choose fast food and snacks, which are non-nutritive, over nutrient-rich foods like rice and meat, and do not care if it shortens their lives, damages their health, or causes many diseases. There are various factors that influence the food habits of each individual within a culture. Among these are societal factors and lifestyle factors. Among societal factors, the food production and distribution system is responsible for the availability of foods, which differs from region to region and country to country. Food availability is influenced by economic and political systems. On the other hand, both the availability and control of food at the societal level influence the lifestyle factors of individuals. These factors include income, occupation, place of residence, regional differences, religious beliefs, health beliefs, physiological characteristics, puberty, gender, a person's state of health and, lastly, the household structure and composition (Kittler and Sucher, 2004).

Tuesday, August 4, 2020

Video Games in Education

Technology and Education: Online Video Gaming

Traditionally, video games were considered to be detrimental and to have negative impacts. Anyone opposing the use of video games in the classroom would be justified to say that the main aim of sending a child to school is to learn (Fromme & Unger, 2012). In addition, video games consume a considerable amount of time that would otherwise be used in revising class notes and curricula. However, a fundamental question that should be asked here is: should video games be banned if it has been established that they enhance the learning outcomes of a child? The truth of the matter is that they should not be. While it is true that playing video games results in some negative impacts such as addiction, psychosocial and medical effects and aggressiveness, a wide range of literature dating back to the 1980s has continued to show that video games have several benefits, especially when integrated into education (Annetta, 2008). Some of the proposed benefits of video games include reduction in reaction times among students, improved hand-eye coordination and increased self-esteem (Martin, 2012). In addition, the fun, curiosity and level of challenge in a video game add to the potential for positive outcomes of playing the games. The current paper will evaluate whether video games enhance learning outcomes in the classroom.

Educationists and policy makers have continuously acknowledged the potential of integrating video games in the classroom, especially after the success of programs such as the Khan Academy. However, people, especially many parents, continue to be of the opinion that playing video games is always detrimental. Nevertheless, video games have significantly enhanced classroom outcomes, especially for K-12 students (Annetta, 2008).

What Video Games Can Teach Us

Video games, regardless of their nature, have become an established practice since the 1980s (Annetta, 2008). Although early video games may have been of low quality, educators had discovered the essence of video games. Since then, a lot has changed, especially because of technological development, but one thing has remained constant all along: the best educational video games have not been just an educational tool, but a way of showing students that learning can be fun (Hall, Quinn, & Gollnick, 2014). It is thus important that video games be made part of educating students.

It is essential to assess the extent to which video games can be of help to students. Since several scholars have confirmed that video games can contribute to enhanced learning outcomes, there has been a rise of what has been termed "edu-tainment" media (Griffiths, 2010). Observing students play some video games while in the classroom is evidence enough to show that they are essential in promoting learning outcomes. It is thus a wake-up call for video game developers to come up with more games that are education oriented (Hall, Quinn, & Gollnick, 2014). There is enough evidence suggesting that important skills are developed and reinforced by video games. For instance, spatial visualization skills, exercised by manipulating or rotating 3D objects, are enhanced by video games (Fromme & Unger, 2012). It has also been established that video games have successfully worked for children who began with poor skills.
Equalization of students' differences in spatial skills performance has also been shown to be improved by video games.

In a report prepared by the Business Roundtable in 2005, a pertinent question was raised. The report expressed deep distress regarding America's capability to sustain the continuing trend of technological and scientific superiority (Hall, Quinn, & Gollnick, 2014). The report went further and recommended that immediate action be taken by the government and all concerned stakeholders to secure a prosperous future for children, including coming up with technologies that will be in a position to reach what the report termed the "net generation" (Annetta, 2008). It is important that educators start viewing video games differently from a traditional society where students were discouraged from engaging in gaming activities. An initiative called "serious games" was started in an attempt to embrace video games as a teaching and training component in education (Griffiths, 2010). The movement aimed at coming up with video games that have the potential to meet the needs of the "net generation". With increased network connectivity and gadgets, video games are the most recent way in which the internet has changed how a generation of young students socialize and view entertainment (Squire, 2006). The growing popularity of video games has gained the attention of the Federation of American Scientists (FAS) (Griffiths, 2010). According to the federation, video games are the next great discovery because of their ability to captivate learners to the extent of spending quality learning time on their own. However, FAS noted that most commercial games are not education oriented, hence there is a need for the government to invent or financially support game developers to come up with educational games (Griffiths, 2010).

One video game that can be described as educational and successful is Immune Attack, developed by FAS (Griffiths, 2010). It was developed thanks to an initiative between the University of Southern California and Brown University. The game was developed as an attempt to educate students on the widely held difficult topic of immunology. In the game, a student must teach his or her immune system how to function properly or die. The human body serves as the playing field, and immune cells face off against viral and bacterial infections. Another successfully used video game aiming to educate students is Food Force, developed by the WFP in 2005 (Griffiths, 2010). The player must engage in food-distributing missions in a famine-affected nation so that the nation becomes healthy and self-sufficient again. In the game, a player can become an expert in nutrition, a director of food programs, an appeals officer or a pilot.

From the above examples it is evident that educational games require a player to have good logic and sharp memory, be able to make decisions or become a problem solver, use critical thinking skills, discover and have good visualization (Squire, 2006). Therefore, incorporating video games in the present-day classroom will enhance a student's ability not just in the classroom, but also in society.

Video Games and Learning

Change is inevitable, and advances in technology are shifting learning from the traditional library to mobile devices (Griffiths, 2010). It is thus important for stakeholders to cultivate the use of video games in several ways.
For example, in enhancing language skills, game developers have come up with video games that involve discussion and sharing, following directions from the game, giving directions and answering questions (Hall, Quinn, & Gollnick, 2014). In addition, the internet has facilitated online gaming where students can interact with other students from all over the world and discuss topics with the help of visual aids. Video games also promote basic math among young students in that a player must interact with the score counters. Modern video games have facilitated basic reading skills as well. Video games with character dialog printed on the screen have facilitated basic reading, since a player must be able to read instructions such as load, change, go, quit or play (Fromme & Unger, 2012). The traditional learning experience for students involved sharing among themselves and with the teacher. However, video games might help revolutionize classroom interaction, especially over the internet. As a result, students' social skills are enhanced.

It is important to note that video games are not only good for "fully functioning" or "normal" students; they can also be applied to children or learners with multiple disabilities, such as those with limited vocal speech acquisition (Hall, Quinn, & Gollnick, 2014). Other researchers have successfully used video games to facilitate learning among disabled children in their spatial abilities, mathematical abilities and problem-solving skills. Some researchers have suggested how technology can be utilized in present-day learning to enhance learning outcomes. Anthropologists and sociologists have also strongly supported the idea of playing. They have suggested that play is part of human activity, and thus playing video games means that students will have a better chance of conceptualizing things they theoretically learnt in class (Fromme & Unger, 2012).

In conclusion, the debate on whether to introduce video games into the classroom for learning purposes may continue for several years because research in this field may be scarce. However, it is evident from several successful video games that incorporating technology in the classroom should not only involve doing away with books and embracing online learning, but also introducing more exciting technology to make learning easier and more fun. It may take time before this concept is widely replicated in the entire education system, but where it has been implemented, learning outcomes have been enhanced. The best thing that the government and all stakeholders in education should do is embrace a multi-sector approach to video games. Sociologists, anthropologists, behaviorists and game or content developers should come together and design games that will have maximum impact on the learning experience for students. Parents remain vulnerable to any information concerning video games and learning. However, it is time that the government reaffirms its position in the current classroom and at the same time prepares the future classroom, where anticipation about video games remains high. The commercial market has a lot to offer; it is thus important that teachers, facilitators and parents evaluate the games they recommend for their children or pupils. Not all video games promote health or education, and because of technological advancement and game upgrades, it has become increasingly challenging to evaluate the educational impact across several studies.

References
Annetta, L. A. (2008). Video games in education: Why they should be used and how they are being used. Theory into Practice, 47(3), 229-239.
Fromme, J., & Unger, A. (2012). Computer games and new media cultures: A handbook of digital games studies. Dordrecht: Springer.
Griffiths, M. (2010). Online video gaming: What should educational psychologists know? Educational Psychology in Practice, 26(1), 35-40.
Hall, G. E., Quinn, L. F., & Gollnick, D. M. (2014). Introduction to teaching: Making a difference in student learning. London: SAGE Publications.
Martin, J. (2012). Game on: The challenges and benefits of video games. Bulletin of Science, Technology & Society, 32(5), 343-344.
Squire, K. (2006). From content to context: Videogames as designed experience. Educational Researcher, 35(8), 19-29.

Saturday, May 23, 2020

Market Trends and Changes in Dell Computer

Market Trends and Changes in Dell Computers
Kim Jones
University of Phoenix
ECO/365
Dr. Dominic F. Minadeo
September 10, 2009

This paper will describe market trends that Dell Computer may face in the near future. Possible changes will be identified within the following areas: market structure, technology, government regulation, production, cost structure, price elasticity of demand, competitors, and supply and demand. This paper will also touch on the impact that new companies may have on Dell. As plans are made for Dell, monopolistic competition seems to remain the most practical market structure for them. Even with adding new products to their PC line, Dell is still competing against other companies with similar products such as laptops, desktops, game systems and many other electronics, allowing the market structure to remain constant. Dell is starting to take a new view of technology as they work on future designs and ideas. While traveling overseas, Michael Dell stated that he and his staff are exploring smaller-screen devices. "Speculation is, Dell is planning a smart-phone that would compete with Research in Motion's (RIMM) BlackBerry, Apple's (AAPL) iPhone, and the various devices running software from Microsoft (MSFT), Nokia (NOK), or the Google (GOOG)-backed Open Handset Alliance" (Kharif, Olga, 2009). Another possibility that Dell is considering, if the smart-phone does not succeed, would be a mobile internet device, or MID. This device is larger than a smart-phone but lighter than the smallest notebook computers, known as netbooks. According to recent surveys of consumers, most would agree to replace the smart-phone idea with the MID. In addition, after research was conducted on other cell-phone makers, a struggle was found to be going on in the smart-phone market regarding whether or not the smart-phone is even a successful possibility. Going green is a technology trend that has been going on for some time and will remain one of Michael Dell's strongest passions. Dell is looking at introducing several new greener services in the near future that will help to assess complexity and simplify IT environments, as this will keep Dell in compliance with government environmental regulations. The first will be the "greenprint", which will help organizations identify inefficient processes and develop ways of fixing them. "Dell is simplifying its client and data center infrastructure, and is also offering services that let organizations assess complexity in their IT environments" (McLaughlin, Kevin, 2007). Dell is also planning to sell PowerEdge servers with Solaris installed direct to customers. This device will be 23 percent more energy efficient than similar offerings from competitors. According to Dell, another service called Image Direct will let customers develop their own custom PC images and upload them to Dell to be installed on the machines they buy. Dell is hoping that these new services will not only create stronger customer relations but also stay strong in helping to save the environment. Another future technology that Dell is introducing is the XFR E6400 Latitude.
This laptop is geared toward military and construction uses, meets military specifications and can withstand being drenched with a fire hose. According to Dell spokesperson Patrick Burns, this laptop can endure the harshest of environments, such as those faced by first responders, field service technicians and those who require systems that meet 13 military specifications including drop tests, sea fog, temperature extremes, thermal shock, explosive environments and many more. This new technology will be funded through a government stimulus package while targeting the military as they are now fighting wars. "The XFR E6400 Latitude can be used by the military to assist in the live updates of satellite maps, enabling satellite-based telecommunications and troops using the laptop to chart incoming missile and artillery fire" (Miguel, Renay San, 2009). Dell is looking into new market trends such as price slashing to stay on top of the market. According to CBC News, Dell Computer's second-quarter profit was whacked 23 percent as the personal-computer industry's slump dragged on this summer. Revenue in the PC category came down nine percent to $2.9 billion, although shipments of consumer PCs increased 17 percent over last year. To preserve market share, PC makers have been slashing prices. This is yet another example of Dell's price elasticity of demand. Even though revenue dropped, Dell was still willing to be elastic with their prices in order to keep clientele motivated and interested in Dell products. In 2006, the Wall Street Journal attacked Dell Computers because of a bad quarter. There was a decline of 51 percent in earnings from the prior year and a 60 percent decline in stock prices from the year 2000. The journal was making accusations that Dell was falling behind other competitors worldwide. Michael Dell had an upbeat, not so negative response: "Ten years ago, this was a $5 billion business; now it's a $56-to-$57 billion business. We still hold the number-one market share in the small computer system industry" (Knowledge@Wharton, 2007). Dell is growing more rapidly internationally, such as 37 percent in China, 82 percent in India, 87 percent in Brazil and 78 percent in Mexico. Even in Japan, where there was little hope for success, Dell leads the desktop market and is number two overall in the Japanese market, and it is building into the future. Dell also had many of their top executives snatched out from under them by a competitor, only for Dell to come out with 60 percent of the profits of the industry and the competitor 40 percent. The company remains and continues to grow more profitable than the next three competitors combined and is also the largest provider of hardware maintenance services for computers in the United States. The competitor has tried to replicate Dell's idea of the supply chain without achieving near the improvements that Dell has in their system. Not to say that the competitors have not achieved some level of improvement, but if one were to look at the basic metrics, like return on capital or inventory management, they have not approached anywhere near the level that Dell does with its supply chain. To the competitor, the supply chain is about innovation and profits, but to Dell the chain is more about customer relationships and service. A few comparisons of customer preference through surveys showed that Dell was preferred above all other computer providers.
In notebooks, two competitors combined carried a preference of 12 percent, while Dell was preferred by 60 percent. Within large institutions and corporations, Dell was preferred by 56 percent in desktops and 53 percent in notebooks. The other two competitors combined carried only a 36 percent preference. These numbers show a strong preference towards Dell and a strong likelihood of repurchase in the future compared to the competitors. Since 2007, Dell has been operating a new customer feedback website in order to better supply customers with what they are demanding; hence supply and demand. Dell is discovering that customers want pre-installed OpenOffice and pre-installed Linux on green, energy-efficient computers. "Customers are also demanding that Dell supply the One Laptop Per Child (OLPC) program in North America to help literacy programs and under-privileged kids by offering $100 laptops with subsidies and easy payment options like $9 a month for a year or $1 a month for 100 months" (Arrowsmith, Robert, 2007). As this need is demanded worldwide, Dell is looking for near-future ways of producing more "green" computers and simultaneously supplying children all over the world with affordable classroom computers. The one negative impact that new companies entering the market could have on Dell would be that of printers. Dell has depended on the sales of Lexmark, Hewlett-Packard, Epson and Canon printers even though they do not benefit from the ink refill business. With newcomers entering the market, Dell may have to create a printer design of their own to be able to stay on top of the market. Otherwise, Dell does not seem to be easily intimidated by new competition, as they remain on top in customer satisfaction and relationships. Going into a bright future of continued success, Dell is still striving to remain among the top PC providers as they explore new possibilities for the company. Dell is looking into new technologies such as the smart-phone, the mobile internet device and many new ways of keeping all products environmentally friendly. Dell's prices remain elastic with the economy and the needs of the customers in order to supply the best product demanded at the most competitive price. Most believe that Dell will continue to prosper as they remain both customer friendly and environmentally friendly for many years to come.

References

Arrowsmith, Robert, February 22, 2007. One Laptop Per Child News. Might OLPC Inspire Dell to Open Source Laptops in USA?
CBC News, August 31, 2009. Dell Profit Sinks 23 Percent in Slumping PC Market. https://www.industry.bnet.com/technology/news-analysis/dell-profit-sinks-23-slumping/9588
Kharif, Olga, March 25, 2009. BusinessWeek - Telecommunications. A Dell Smartphone Would Face Big Hurdles. https://www.businessweek.com/technology/content/mar2009/tc20090324_741292.htm
Knowledge@Wharton, September 6, 2006. Michael Dell: Still Betting on the Future of Online Commerce and Supply Chain Efficiencies. https://knowledge.wharton.upenn.edu/article.cfm?articleid=1543
McLaughlin, Kevin, November 14, 2007. ChannelWeb. Michael Dell: Going Green is Key to Industry's Future. https://www.crn.com/networking/203100337;jsessionid=F5WIS1RFNAC3DQE1GHPSKH4ATMY32JVN
Miguel, Renay San, March 10, 2009. TechNewsWorld. Personal Computers. Dell Rolls Out Laptop for the Hard-Hat Set. https://www.technewsworld.com/rsstory/66447.html?w/c=1252274516

Monday, May 11, 2020

Video transmission in wireless mesh networks

1. Introduction: Video Transmission in Wireless Mesh Networks

Recently there has been research interest in supporting Wireless Mesh Networks (WMNs). Wireless mesh networks provide cheap and efficient network connectivity over a large region. Using multiple paths for video communication offers a number of significant advantages, such as load balancing, potentially higher video bit rate and improved error resilience (Vishnu Navda). The development of video applications has been rapid. Municipalities now deploy video applications to provide round-the-clock surveillance of critical areas of their communities. Video streams over wireless mesh networks monitor corridors and traffic grids during real-time events. Business users depend on video conferencing to enhance productivity and reduce travel. Consumers in large numbers are accessing on-demand video sites, generating millions of unique video streams every day. Service providers are looking for ways to serve this customer demand by delivering differentiated streaming, entertainment and on-demand video services in real time (BelAir Networks). Video over wireless mesh networks is in turn enabling dynamic new applications, including mounting video cameras on buses, trains, police cruisers, ships and ambulances. These video surveillance capabilities allow safety personnel and traffic responders to make better, faster and more informed decisions (BelAir Networks).

1.1 The Ideal Video Network

The BelAir Networks wireless mesh architecture provides carrier-grade performance for creating consistent, cost-effective video networks that scale to support municipalities, including local businesses, neighborhoods, school campuses and traffic corridors. The BelAir wireless network provides Quality of Service (QoS) and is well suited to bandwidth-intensive applications such as multimedia and video, with the industry's lowest latency and minimal jitter. By using wireless backhaul to transport high-bandwidth video content, wireless mesh networks eliminate the need for long cable runs. The BelAir wireless mesh networks also supply mobile access to this video content, so police cruisers and responders can share real-time coverage of incidents. BelAir Networks wireless mesh products can support video applications as well as other Public Works, Public Safety and Public Access networks, as shown in Figure 1. Crystal-clear voice service for civil employees allows cities to save on cellular costs by establishing their own Voice over Internet Protocol (VoIP) network. With a Public Access network, community groups, trades people, schools, visitors, taxpayers and even remote municipal employees can easily access government information resources and internet-based information from anywhere. These capabilities enhance the value of the network and make financial sense (BelAir Networks).

1.2 Ganges Wireless Mesh Network

For monitoring, the Ganges network follows the schematic diagram shown below.
2. Problem Formulation in the Ganges Architecture

2.1 Routing Problem

Streaming videos have high bandwidth requirements. The routing problem is to determine the paths between the CAN and each video source so that the available bandwidth is used effectively and good throughput is obtained. All flows terminate at the CAN: the CAN is the root of the tree and the sources are its intermediate nodes and leaves. The channel capacity limits the total number of bytes the CAN can receive per unit time, which is an upper bound on the sum of the throughputs of all flows. The actual throughput, however, is commonly much less than this bound. The reason is that all nodes operate in the same frequency band and contend for the channel with the nodes within their sensing range. Intra-flow contention arises among the nodes carrying the same flow along a multi-hop path and limits the total throughput along that path. In addition, when one or more flows are combined, the capacity is shared between the flows and the throughput of each flow is reduced. In a single-channel mesh network it is difficult to eliminate intra-flow contention, but choosing spatially separated routes for different flows reduces inter-flow contention and improves the throughput of each flow (Vishnu Navda).

2.2 Packet Loss and Delay-Jitter Problem

Over multi-hop wireless networks, two different types of packet loss occur when video is sent in real time. Firstly, packets may be received corrupted due to channel errors; the 802.11 MAC uses retransmissions to improve reliability. Secondly, every packet has a deadline by which it must reach the destination for the video to be played out; packets that arrive late are discarded and effectively lost, which degrades the online video session and reduces quality. Variations in network congestion and packet losses can also cause large variations in the delay experienced by the packets of a flow. A playback buffer is used to absorb this jitter; it adds some delay between the actual streaming and the playback time, and packets received in time are buffered before being played back. A lower buffer size requirement implies a lower delay. For the video to be played back without interruption, the playback buffer should never become empty.

3. System Design in the Ganges Architecture

In this system design, we explain how the routing tree is constructed and how the maximum rate at which the video flows can stream is determined. We then present several adjustments at the routers that reduce packet delay jitter and handle packet losses, improving the quality of the video streams.

3.1 Aggregation Tree Construction and Rate-based Flow Control

A simple grid network for multi-stream aggregation is shown in Fig. 3(a) in order to analyze the impact of routing on the available bandwidth of each flow. Figs. 3(b) and 3(c) depict two different sets of routes for each flow from the source nodes to the root node. With RTS-CTS enabled, all links compete with each other in both of these tree instances. The example shows that the aggregate throughput of all contending flows is the same no matter where they merge.

Figure 3: (a) Connectivity graph. Two instances of the aggregation tree: (b) and (c).
For the first case (Figure 3(b)), there are 6 competing transmitters and each gets an equal share of the channel capacity (i.e. C/6), so the highest achievable aggregate throughput at the root is 3 * C/6 = C/2. For the second case (Figure 3(c)), where the three flows merge before reaching the root, there are only 4 transmitters, so every node gets a 1/4 share of the channel bandwidth and the aggregate throughput at the root is C/4. But if the sources are limited to sending at a rate of C/6, then the intermediate relay node gets to use the remaining channel time, i.e. C/2, and the aggregate throughput is the same as in case 1. Thus, with flow control, merging two or more contending flows does not affect the aggregate throughput.

Figure: Four flows aggregated along disjoint contending paths, and a variant in which some flows are merged close to the sources, which lengthens the path of some flows but raises the per-flow throughput.

Another advantage is that when two or more flows are merged, fewer edges are used for video transmission, which increases the chances of finding spatially disjoint paths for the other flows. Although the flows ultimately interfere at the root, the total bandwidth at the root is higher and therefore the per-flow bandwidth is higher.

3.2 Distributed Spatial Aggregation Tree Construction

A greedy algorithm sequentially assigns the best route to each flow. For each source node v, the goal is to determine a path to the root that experiences contention only at the last few hops; this is known as the Spatial-Path search. These paths have length at most h(v) + 1 hops, where h(v) is the hop distance of node v. If this constraint cannot be satisfied, the algorithm switches to the Compact-Path search. The flow value f(u) of a node u is the number of flows carried by u, and the blocking value b(u) of a node u is the number of contending transmitters within the one-hop neighborhood of u. Now consider the search for a route from the root s to a source node v. The Spatial-Path route search is tried first. It considers a subgraph consisting of the nodes in the set {s} ∪ N(s) ∪ R, where N(s) is the set of one-hop neighbors of s, and R is the set of all nodes with f(u) = 0 and b(u) = 0. The cost of every edge is 1. The Spatial-Path search is successful if the shortest path found, when one exists, is at most h(v) + 1 hops long.

3.3 Rate-based Flow Control Algorithm

After establishing the routes, we have to determine the highest per-flow bit rate that the aggregation tree can support, so that the network resources are shared efficiently across all flows. Rate control prevents some flows from forcing traffic into the network while other flows are starving. If the sources stream data faster than the tree root, which is usually the bottleneck, can handle, the per-flow throughput drops and packet losses occur in the network. A binary search scheme is used to determine the optimal operating rate. All sources start streaming at a certain minimum constant bit rate. At each step, the throughput at the root is evaluated and the offered load at each source is doubled. Doubling the load stops when the throughput of one or more flows drops below the offered load. In the subsequent step the load is reduced by half of the preceding increment, similar to standard binary search.
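A minimal sketch of this rate-probing loop in Python, assuming a hypothetical measure_min_flow_throughput(load) helper that streams for a probing interval with every source offering `load` bits per second and returns the lowest per-flow throughput observed at the root:

    def find_operating_rate(measure_min_flow_throughput, min_rate, tolerance=10_000):
        """Binary-search the highest per-source rate the aggregation tree sustains.

        Sketch only: the measurement callback, the starting rate and the
        tolerance (in bit/s) are assumptions, not part of the Ganges design.
        """
        # Phase 1: double the offered per-source load until some flow starves.
        low, high = min_rate, min_rate
        while measure_min_flow_throughput(high) >= high:
            low, high = high, high * 2
        # Phase 2: shrink the step by half each time, as in standard binary
        # search, until the sustainable rate is pinned down.
        while high - low > tolerance:
            mid = (low + high) / 2
            if measure_min_flow_throughput(mid) >= mid:
                low = mid
            else:
                high = mid
        return low

The value returned by such a probe would be the constant bit rate at which every source is then configured to stream.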
The reduction continues until the throughput again matches the offered load. These two processes, halving and doubling, are repeated until the throughput of each flow stabilizes at its greatest value.

3.4 Delay-Jitter Reduction Techniques

To smooth out packet delay jitter, the CMN maintains a per-flow playback buffer. The size of the buffer for a flow depends on the one-way latency of its path. A larger buffer introduces more delay in the playback of real-time video, so a small bounded delay and a limited playback buffer are desirable. In order to reduce the end-to-end delay variations, we design the following optimizations for the intermediate routers.

3.5 Packet Reordering Schemes

In these schemes an intermediate router reorders the packets in its queue based on two criteria. Firstly, packets with a lower delay budget need to be delivered before packets with a higher delay budget, so the router orders packets by their delay budget. Secondly, when a router carries traffic from multiple flows and the instantaneous throughput of a particular flow drops below its assigned bit rate, that flow's packets would otherwise experience a larger delay because they sit behind other packets in the queue. A router therefore constantly measures the instantaneous rate of each flow and assigns higher priority to flows with lower instantaneous throughput, while increasing the average delay of the other packets only by a small fraction. If a particular flow is starving and its rate is below the allocated rate, assigning a higher priority to its packets alleviates the problem of flow starvation.

3.6 Early-Drop Scheme

Using the path latency information towards the CMN, a router estimates the expected time at which a packet in its transmit queue will reach the CMN; packets that are expected to miss their playback deadline can be dropped early instead of wasting channel time.

Problem Formulation in Wireless Mesh Networks

A wireless mesh network is considered as a group of nodes. We assume that connectivity exists among the nodes and that, through some physical-layer or scheduling mechanism, transmissions between certain groups of nodes do not interfere with each other. A multi-channel, multi-radio environment is one example: transmissions within a group of nodes need not interfere with nearby nodes if the channels are assigned appropriately among the radios. In other instances the physical or MAC layer uses OFDM; the frequency carriers at each node can then be assigned appropriately, which also reduces the interference between nodes. A wireless mesh network is modeled as a graph whose vertices are the nodes and whose edges are the wireless links. For each wireless link we can calculate the capacity, and a mean packet loss probability on the link due to transmission errors can be assumed. In this network we consider a group of video communication sessions, each with a source node and a destination node, and a set of candidate paths for each source-destination pair. The video stream originates at the source node; its total rate is bounded above and below, with the bounds determined by the specific video coder and the video sequence used at the source node. The rate of the video stream is split across the paths, as sketched below.
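A hedged sketch of the implied constraints, using assumed symbols that are not present in the original text (R_s for the total rate of session s, r_{s,p} for the rate assigned to path p in the candidate set P_s, and R_s^min, R_s^max for the coder-determined bounds):

    \[
      R_s^{\min} \le R_s \le R_s^{\max}, \qquad
      \sum_{p \in P_s} r_{s,p} = R_s, \qquad
      r_{s,p} \ge 0 \quad \text{for all } p \in P_s .
    \]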
A path that is assigned a rate of zero is simply not selected, so rate allocation correlates with path selection: an element of the rate vector is zero exactly when the corresponding path is unused, and the remaining elements must satisfy the conditions above.

4. Literature Review

4.1 Channel Video Transmission

The importance of telecommunications across long distances lies in exchanging information by means of mail, radio, television, telephone and the internet. Samuel Morse sent his first message over a telegraph line between Washington and Baltimore on May 24, 1844, which opened a new page in the history of modern telecommunications. Improved technologies in the areas of computers, telecommunications, semiconductors and wireless communications have constantly changed the objectives and features of communication, and new applications in radio transmission appear every day. People can now enjoy a wide range of signals, such as audio, pictures, text and video, but the exchange of information must also become much cheaper and considerably faster. These demands drive the different trends in person-to-person telecommunications and keep raising the speed at which information is exchanged. Modern telecommunications allow the exchange of information with anyone, at any time, and anywhere.

Representing video signals requires a large amount of data compared to other types of signals, namely text, audio and images. Network-quality video as specified by the National Television System Committee (NTSC) requires a bandwidth of about 45 megabits per second (Mbps). A video object conforming to Recommendation 601 of the International Radio Consultative Committee (CCIR) calls for a bandwidth of 216 Mbps, and a video object at High Definition Television quality may require a bandwidth of 880 Mbps, which places a heavy demand on storage and transmission. Video transmission applications also typically impose an end-to-end delay constraint. Scientific developments keep pushing video transmission towards higher bandwidth and higher throughput, and video transmission applications such as video broadcasting, distance learning and video conferencing have gained recognition. Higher-data-rate cellular networks, such as GSM-GPRS, CDMA and UMTS, together with video-display-capable mobile devices, now provide video capabilities to clients.

Applications: Video transmission applications are classified into groups according to their nature, which determines the protocol environment and the constraints. Based on delay constraints, video transmission applications can be broadly classified into three categories.

Conversational applications: These include two-way video transmission across Ethernet, LAN, DSL, wireless and mobile networks and ISDN, such as video telephony, video conferencing and distance learning. These applications are characterized by very strict end-to-end delay constraints, typically less than a few hundred milliseconds, and they implicitly need real-time encoders and decoders. Feedback-based source coding can be used in real time. The severe delay requirements also limit the allowed computational complexity, especially for the encoders.
Video download and storage applications: For these high-latency applications, pre-encoded video is stored on a server for downloading. Reliable protocols such as HTTP or FTP are used for the download, and the application treats the encoded bit stream as a regular data file. Because the encoding is done offline, the computational complexity constraint on the encoder is relaxed, and highly optimized video coding with high coding efficiency is possible. For video storage, delay constraints and computational complexity matter less and error resiliency is not a concern; improving compression efficiency is the ultimate goal of traditional video storage.

Video streaming applications: This type of application sits between conversational and download applications. Unlike download applications, the complete video bit stream does not have to be received before playback; normally the preliminary buffering time is a few seconds. Once playback starts it proceeds in real time and must be continuous, without interruptions. The video to be streamed is typically pre-encoded and is sent from a single server, but may be distributed in a point-to-point, multipoint or even broadcast fashion.

4.2 Elements of Video Communication Systems

The figure below shows a block diagram of a video communication system. The video encoder, rate control and decoder are its major components, and a video transmission system contains five important conceptual components. The source encoder compresses the video signal; the application layer is in charge of packetization and coding, and the resulting media packets are passed directly to the lower layers for immediate transmission or for media storage. The transport layer delivers the media from sender to receiver and performs congestion control, aiming at the best user experience while sharing the network resources with other users. The packets are delivered through the transport network to the client. Depending on the conditions, the receiver decompresses the video packets, implements the interactive user controls and displays the video.

Figure: Video transmission system architecture.

Video transmission systems have been of great significance for many years because signal coding, or compression, standards exist; the compression standards used in video transmission systems include H.264, MPEG-4 and MPEG-2. Reducing source redundancy is the main objective of compression. Lossless compression reduces the source bit rate only to a limited extent, so lossy compression may be required for video transmission applications. Almost all communication systems have restricted bandwidth, and these conflicting requirements establish the trade-off between source and channel coding. Compression reduces the number of bits needed to represent the video sequence by exploiting both temporal and spatial redundancy. On the other hand, to alleviate the effect of channel errors on the decoded video quality, redundancy is added back to the compressed bit stream. The source bit rate is shaped for each video frame and for each video unit within the frame, depending on the channel state information (CSI) reported by lower layers such as the transport layer; the bit rate is estimated based on the channel rate.
A key theme of a video streaming system is the information exchange across different layers. The network block represents the communication path between the sender and the receiver; this path may include subnets, routers and wireless links, and the network may offer several paths that support QoS. Packets may be dropped because of congestion in the network. At the transport/application layers, FEC adds parity checks to compensate for packet losses, and if the application allows it, lost packets may be retransmitted. At the receiver side, the transport and application layers are responsible for de-packetizing. The video decoder decompresses the video packets and displays the video frames in real time, ideally continuously and without distortion. The video decoder usually employs error concealment techniques to alleviate the effect of packet loss; to mask the lost information, the concealment strategies exploit the spatio-temporal correlations in the received video.

4.3 Network Interface

The network interface is the model between networks and applications; it consists of five important layers: the application layer, transport layer, network layer, link layer and physical layer. The main task of the network interface is to packetize the compressed video stream and send the packets across the network. The common issues at the network interface are channel coding, including retransmission, and monitoring of the network condition. QoS parameters guide the transmission priority used in schemes such as FEC, power adaptation and retransmission. Research on video transmission should therefore focus on this part of the system design.

4.4 Network Protocols

IP is the most commonly used network layer protocol. IP provides a connectionless delivery service, meaning that each packet is routed separately and independently, regardless of source and destination; it offers best-effort, variable-quality service across the network. The figure below shows the protocol stack of an IP network, in which a wide range of communication protocols are used.

Figure: Illustration of protocol layers.

TCP operates at the transport layer; it is a connection-oriented protocol and supplies a reliable service. TCP provides reliability through acknowledgements and has its own congestion control mechanisms. UDP is the alternative to TCP and, together with IP, is sometimes called UDP/IP. UDP is a connectionless protocol: it does not provide reliable transmission across the network, it does not sequence packets as data arrives, and it does not retransmit lost packets. TCP, on the other hand, can introduce unbounded delay because of repeated retransmission. UDP is therefore widely used by video applications, since their strict delay constraints and QoS requirements make it the more appropriate choice. Because UDP itself does not constrain the bit rate of applications, additional congestion control should be deployed on top of UDP when UDP/IP is used. A checksum capability is available in UDP to verify whether the data has arrived correctly, and only correct packets are delivered to the application layer. This should be kept in mind for wired IP networks, where losses are due to buffer overflow and whole packets are lost.
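A minimal sketch of UDP-based video packetization in Python; the 12-byte header layout, port and payload size below are assumptions for illustration (real deployments usually run RTP on top of UDP rather than a hand-rolled header):

    import socket
    import struct
    import time

    MTU_PAYLOAD = 1400              # keep datagrams below a typical MTU (assumption)
    HEADER = struct.Struct("!IQ")   # sequence number + send timestamp in ms

    def stream_over_udp(encoded_video: bytes, addr=("127.0.0.1", 5004)):
        """Send a pre-encoded bit stream as numbered UDP datagrams.

        Sketch only: the header layout and destination port are invented here;
        the receiver can use the sequence number to detect loss and reorder,
        and the timestamp to schedule playback.
        """
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        seq = 0
        for offset in range(0, len(encoded_video), MTU_PAYLOAD):
            chunk = encoded_video[offset:offset + MTU_PAYLOAD]
            header = HEADER.pack(seq, int(time.time() * 1000))
            sock.sendto(header + chunk, addr)  # no retransmission: late data is useless
            seq += 1
        sock.close()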
In a wireless IP network, received packets may contain bit errors, yet such packets can still be useful to the application.

4.5 Error-Resilient Video Coding

If video source coding removed all the redundancy among the source symbols and achieved the entropy, a single error at the source would cause an immense amount of distortion; in other words, ideal source coding is not robust to channel errors. Designing an ideal or near-ideal source code is, moreover, complicated, particularly for video signals, since video sources have memory, are time varying, and their statistical distribution may not be available during encoding (mainly for live video applications). As a result, some redundancy inevitably remains after source coding. Rather than concentrating entirely on removing the source redundancy completely, we should make use of it. As already discussed in Chapter 3, the nature of JSCC is to optimally add redundancy at both the source- and channel-coding levels. Thus the redundancy remaining between the source symbols after source coding needs to be regarded as an inherent form of channel coding [94]. Indeed, when JSCC is concerned, channel coding and source coding can sometimes barely be differentiated. Generally speaking, the additionally added redundancy should protect the transmission against errors: it limits the distortion caused by packet losses and facilitates error detection, concealment and recovery at the receiver side. To get the most out of the error-resilience capability, error-resilient source coding must add redundancy optimally during source coding, in order to match the application requirements such as computational capacity, channel characteristics and delay requirements. Before reviewing the error-resilient source-coding tools, we briefly outline the video compression standards, introduce the required vocabulary and highlight the key technologies. Finally, we focus on the discussion of optimal mode selection, which represents the error-resilient source-coding methodology used throughout this monograph as an illustration of how to achieve optimal error-resilient coding.
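As a hedged sketch of how this trade-off is commonly formalized (the symbols here are assumptions, not taken from the text): the source-coding rate R_src and the channel-coding redundancy R_chan are chosen jointly to minimize the expected end-to-end distortion under a total rate budget,

    \[
      \min_{R_{\mathrm{src}},\, R_{\mathrm{chan}}}
      \; E\!\left[ D_{\mathrm{enc}}(R_{\mathrm{src}}) + D_{\mathrm{loss}}(R_{\mathrm{src}}, R_{\mathrm{chan}}) \right]
      \quad \text{subject to} \quad
      R_{\mathrm{src}} + R_{\mathrm{chan}} \le R_{\mathrm{budget}},
    \]

where D_enc is the quantization distortion and D_loss the expected distortion due to residual packet losses; spending more of the budget on protection or on intentional source redundancy trades compression efficiency for error resilience.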
4.6 Video Compression Standards

In this section we discuss in some detail one of the most widely used video coding methodologies, hybrid block-based motion-compensated (HBMC) video coding. Several successful standards have emerged in the past, thanks to efforts from academia and industry driven by the major developments in digital video applications. The two main families of video compression standards are the Moving Picture Experts Group (MPEG) family and the H.26x family. These standards are application oriented and deal with a large collection of issues such as bit rate, picture quality, complexity and error resilience. H.264/AVC is the latest standard, aiming to provide state-of-the-art compression technology. It is the outcome of the merger, in 2001, of the ITU H.26L group and the MPEG-4 committee into the Joint Video Team (JVT), and it is a natural extension of the previous standards adopted by the two groups. It is therefore also referred to as AVC, H.264, or MPEG-4 Part 10 [105]. For an overview and comparison of the video standards, see [106]. It is important to note that all the standards specify only the decoder, i.e., they standardize the syntax for the representation of the encoded bit stream and define the decoding process, but leave considerable flexibility in the design of the encoder. This approach leaves latitude for optimizing the encoder for particular applications [105].

All of the above video compression standards are based on the HBMC approach and share the same block diagram, shown in Fig. 4.1. The associated luma and chroma samples of a 16 × 16 region in each video frame are organized into block-shaped units called macroblocks (MBs). The core of the encoder is motion-compensated prediction (MCP), as shown in Fig. 4.1(a). Motion estimation (ME) is the first step in MCP; it aims to locate the region in the previously reconstructed frame that best matches each MB in the current frame. The offset between the MB and the prediction region is known as the motion vector. The motion vectors forming the motion field are differentially entropy coded. Motion compensation (MC) is the next step in MCP: the reference frame is predicted by applying the motion field to the previously reconstructed frame. The displaced frame difference (DFD), known as the prediction error, is obtained by subtracting the prediction from the current frame. Following MCP, the DFD is processed by three main blocks: transform, quantization and entropy coding. The chief reason for using a transform is to decorrelate the data, so that the energy is represented more efficiently in the transform domain and the resulting transform coefficients are much simpler to encode. Among the transforms used in image and video coding, the discrete cosine transform (DCT) is one of the most widely used, owing to its high transform coding gain and low computational complexity. Quantization, which introduces loss of data, is the main source of compression gain. The quantized coefficients are entropy encoded, e.g., using Huffman or arithmetic coding. The DFD is first divided into 8 × 8 blocks; the DCT is then applied to every block and the resulting coefficients are quantized. In most block-based motion-compensated (BMC) codecs, a given MB can be intraframe coded, coded using motion-compensated prediction, or simply copied from the previously decoded frame; these prediction modes are denoted as Intra, Inter and Skip modes, respectively.

Figure 4.1: Hybrid block-based motion-compensated video (a) encoder and (b) decoder.

For each MB, coding and quantization are performed according to its mode; consequently, the coding parameters of an MB are typically represented by its prediction mode and its quantization parameter.
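As a rough illustration of the ME/MC/DFD steps described above, here is a sketch of full-search block matching with the sum-of-absolute-differences (SAD) criterion; the 16-pixel block size matches the MB definition, while the search range and the SAD criterion are common choices assumed for the example, not mandated by the standards:

    import numpy as np

    BLOCK = 16    # macroblock size in pixels
    SEARCH = 7    # +/- search window in pixels (assumption)

    def motion_estimate(cur, ref, by, bx):
        """Full-search block matching for one 16x16 MB using the SAD criterion."""
        block = cur[by:by + BLOCK, bx:bx + BLOCK].astype(np.int32)
        best_sad, best_mv = None, (0, 0)
        for dy in range(-SEARCH, SEARCH + 1):
            for dx in range(-SEARCH, SEARCH + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + BLOCK > ref.shape[0] or x + BLOCK > ref.shape[1]:
                    continue
                cand = ref[y:y + BLOCK, x:x + BLOCK].astype(np.int32)
                sad = np.abs(block - cand).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv

    def displaced_frame_difference(cur, ref):
        """Motion-compensate each MB and return the prediction error (DFD)."""
        dfd = np.zeros_like(cur, dtype=np.int32)
        for by in range(0, cur.shape[0] - BLOCK + 1, BLOCK):
            for bx in range(0, cur.shape[1] - BLOCK + 1, BLOCK):
                dy, dx = motion_estimate(cur, ref, by, bx)
                pred = ref[by + dy:by + dy + BLOCK, bx + dx:bx + dx + BLOCK]
                dfd[by:by + BLOCK, bx:bx + BLOCK] = (
                    cur[by:by + BLOCK, bx:bx + BLOCK].astype(np.int32) - pred
                )
        return dfd

The DFD produced here would then be split into 8 × 8 blocks, transformed with the DCT, quantized and entropy coded, as the text describes.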
As shown in Fig. 4.1(b), at the decoder the inverse DCT (IDCT) is applied to the dequantized DCT coefficients to obtain a reconstructed version of the DFD; the reconstructed version of the current frame is then obtained by adding the reconstructed DFD to the motion-compensated prediction of the current frame, which is formed from the previously reconstructed frame. Besides DCT-based video compression, the wavelet representation provides a multiresolution/multiscale decomposition of a signal with localization in both time and frequency. One of the main advantages of wavelet coders, for both still images and video, is that they are free of blocking artifacts. In addition, they typically offer continuous data-rate scalability. The discrete wavelet transform (DWT) and subband decomposition have gained increasing popularity in image coding, owing to the substantial contributions in [107,108], JPEG2000 [109], and others over the past decades. More recently, the DWT has also been actively applied to video coding [110,111,112,113,114,115]. 3D wavelet, or subband, video codecs have received special attention because of their inherent feature of full scalability. The drawback of these approaches used to be their poor coding efficiency, caused by inefficient temporal filtering. A major breakthrough came from combining lifting techniques with 3D wavelet or subband coding [116,117], which greatly improved the coding efficiency and led to renewed efforts toward the standardization of wavelet-based scalable video coders.

5. Joint Source-Channel Video Transmission

5.1 Error-Resilient Source Coding

In this section we first review the video source-coding techniques that support error resilience. After that, we review in more detail the error-resilience features defined in the H.263 and H.264/AVC standards. We do not discuss MPEG-4 separately, since its error-resilience modes largely overlap with the general techniques described below.

5.1.1 General error-resilience techniques

As mentioned above, error resilience is achieved by adding redundancy bits at the source-coding level, which obviously reduces the coding efficiency. The resulting question is how to add the redundant bits optimally so as to control the tradeoff between coding efficiency and error resilience. To address this question, we need to identify the steps in source coding at which corrupted bits cause significant video quality degradation. As discussed in Chapter 2, motion compensation introduces temporal dependencies between frames, which causes errors in one frame to propagate to future frames. In addition, the use of predictive coding for the DC coefficients and the motion vectors introduces spatial dependencies within a picture. Because of motion compensation, an error in one part of a picture therefore affects not only its neighbors in the same picture but also the subsequent frames. The solution for limiting this error propagation is to terminate the dependency chain, and techniques such as intra-MB insertion, independent segment decoding, reference picture selection (RPS), video redundancy coding (VRC), and multiple description coding are designed for this purpose.
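The temporal error propagation just described, and the way intra refresh terminates the dependency chain, can be illustrated with a small simulation. The first-order recursion below is only a toy model: the leakage factor, refresh period, and injected distortion are invented for illustration and are not taken from any standard or from the monograph's distortion model.

```python
# Toy illustration of temporal error propagation with and without periodic intra refresh.
# `leak` loosely models the attenuation of propagated errors (e.g., by loop filtering).
def propagate(n_frames, hit_frame, leak=0.9, intra_period=None, hit_distortion=1.0):
    d = [0.0] * n_frames
    for k in range(1, n_frames):
        refreshed = intra_period is not None and k % intra_period == 0
        d[k] = 0.0 if refreshed else leak * d[k - 1]   # distortion inherited from the reference frame
        if k == hit_frame:
            d[k] += hit_distortion                     # distortion injected by a channel error
    return d

no_refresh = propagate(30, hit_frame=5)
refresh_10 = propagate(30, hit_frame=5, intra_period=10)
print("distortion at frame 25, no intra refresh      :", round(no_refresh[25], 3))
print("distortion at frame 25, intra refresh every 10:", round(refresh_10[25], 3))
# Without refresh the error persists (attenuated) for the rest of the sequence;
# with periodic intra coding the dependency chain is broken at the next refresh point.
```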
A second approach to error resilience is to add redundancy at the entropy-coding level. Examples include reversible VLCs (RVLCs), data partitioning, and resynchronization, which help confine the effect of error propagation to a smaller section of the bit stream once an error is detected. A third type of error-resilient source-coding tool assists error recovery or concealment of the effects of errors; flexible macroblock ordering (FMO) is an example. Finally, although scalable coding was designed mainly for transmission, computation, and display scalability in heterogeneous environments, it can also provide error resilience by enabling unequal error protection (UEP) through prioritized QoS transmission. We next describe these error-resilience techniques in a little more detail.

Data partitioning: This functionality is appropriate for wireless channels, where the bit error rate is comparatively high. In traditional packetization, the data of one MB, including the differentially encoded motion vectors and the DCT coefficients, is packetized together, followed by the data of the next MB. With the data-partitioning mode, by contrast, the data of the same type from all MBs in a packet is grouped together to form a logical unit, and an additional synchronization marker is inserted between different logical units. This mode provides a higher level of error resilience because it enables finer resynchronization within packets: when an error is detected, the decoder re-establishes synchronization at the next secondary marker, so only the logical unit in which the error occurs is discarded. In traditional packetization, by contrast, an error causes the decoder to discard the data of all MBs in the packet that follow the detected error. Figure 4.2 illustrates a typical data-partitioning syntax defined in MPEG-4. The H.264/AVC syntax allows each slice to be divided into up to three different partitions, and the same functionality is also defined, with different syntax, in MPEG-4 and in H.263++ Annex V. The logical units within a single packet typically differ in importance: the packet headers usually form the most significant unit, followed by the motion vectors and then the DCT coefficients. Data partitioning is particularly advantageous when combined with error concealment.

Figure 4.2: Packet structure syntax for data partitioning in MPEG-4.

RVLC: Reversible VLCs enable decoding in both the forward and the backward direction when errors are detected. In this way a large amount of data can be salvaged, since only the portion between the first MB in which an error is detected in the forward direction and the first MB in which an error is detected in the backward direction is discarded. This mode enhances the error resilience by using a symmetric code table, at the cost of some coding efficiency. RVLCs are defined in H.263++ Annex V [120], where the packet headers and the motion vectors can be encoded using RVLCs while the DCT coefficients are still coded with the baseline table, since corruption of the DCT information usually has less impact on the video quality than corruption of the packet headers and the motion information.
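Returning to the data-partitioning layout described above, the following minimal sketch contrasts the two packetizations. The field names, marker string, and toy contents are invented for illustration and do not follow any standard's actual syntax.

```python
# Toy macroblock records; the strings stand in for coded header, motion-vector, and DCT bits.
mbs = [
    {"hdr": "H0", "mv": "MV0", "dct": "D0"},
    {"hdr": "H1", "mv": "MV1", "dct": "D1"},
    {"hdr": "H2", "mv": "MV2", "dct": "D2"},
]

def traditional_packet(mbs):
    """All data of MB 0, then all data of MB 1, and so on, in one packet."""
    return "|".join(mb["hdr"] + mb["mv"] + mb["dct"] for mb in mbs)

def partitioned_packet(mbs, marker="<SYNC>"):
    """The same data regrouped by type into logical units separated by sync markers."""
    headers = "".join(mb["hdr"] for mb in mbs)
    mvs = "".join(mb["mv"] for mb in mbs)
    dcts = "".join(mb["dct"] for mb in mbs)
    return marker.join([headers, mvs, dcts])

print("traditional:", traditional_packet(mbs))
print("partitioned:", partitioned_packet(mbs))
# If an error hits the DCT unit of the partitioned packet, the decoder can resynchronize at the
# preceding marker and still use every MB's header and motion vector for concealment; in the
# traditional layout the same error forces it to discard all MB data after the error position.
```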
Resynchronization: As its name implies, this mode aims to resynchronize the operations of the encoder and the decoder when errors are detected in the bit stream. It is usually combined with data partitioning. MPEG-4 defines several approaches to resynchronization. Among them, the video packet approach is one of the most important; it is very similar in principle to the slice structured mode of H.263+. Another is the fixed-interval synchronization approach, which requires the video packets to start only at allowable, fixed intervals in the bit stream.

Scalable coding: Layered (scalable) video coding produces a hierarchy of bit streams in which different parts of the encoded stream contribute unequally to the overall quality. Scalable coding therefore has inherent error-resilience benefits, particularly if the layered property can be exploited in transmission, for example by partitioning the available bandwidth so as to provide UEP for layers of different importance. This technique is commonly referred to as layered coding with transport prioritization [121].

Multiple description coding (MDC): MDC refers to a form of compression in which a signal is coded into a number of separate bit streams, each of which is referred to as a description. MDC has two important characteristics. First, each description can be decoded independently, without relying on any other, to give a usable reconstruction of the original signal. Second, combining more of the correctly received descriptions improves the quality of the decoded signal. Prioritized transmission is therefore not essential for MDC. It is worth noting that the descriptions are independent of one another and are typically given approximately equal importance.

Video redundancy coding (VRC): VRC supports error resilience by limiting the temporal dependencies between frames that are introduced by motion compensation. The video sequence is divided into two or more subsequences, called threads. The threads are encoded independently of each other, and each frame is assigned to one of the threads in a round-robin fashion. At regular intervals all threads converge into a so-called sync frame, which serves as the synchronization point from which new threads begin. As long as at least one thread remains intact, the sync frame can be generated from it without degradation, so this method usually outperforms plain I-frame insertion.
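The round-robin frame-to-thread assignment used by VRC is easy to sketch. In the toy example below, the thread count and the sync-frame period are illustrative parameters; the point is only that losing one thread leaves the other threads, and the common sync frames, decodable.

```python
def vrc_schedule(n_frames, n_threads=2, sync_period=8):
    """Assign each frame to a thread round-robin; every `sync_period`-th frame is a sync frame."""
    schedule = []
    for k in range(n_frames):
        if k % sync_period == 0:
            schedule.append((k, "SYNC"))                 # common synchronization point for all threads
        else:
            schedule.append((k, f"thread-{k % n_threads}"))
    return schedule

sched = vrc_schedule(16)
print(sched)

# If thread-1 is corrupted, the frames of thread-0 and the sync frames remain decodable,
# so decoding continues (at a reduced frame rate) until the next sync frame restarts all threads.
lost = "thread-1"
decodable = [k for k, thread in sched if thread != lost]
print("decodable frames when", lost, "is lost:", decodable)
```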
In the next section we briefly discuss the error-resilience features defined in H.263 and H.264/AVC. The general techniques described above are not repeated there, even where they are covered by the two standards.

5.2 Error-resilience features in H.263+/H.263++/H.264

H.263+, H.263++, and H.264/AVC define numerous features aimed at supporting error resilience.

Slice structure: This mode, defined in H.263+ Annex K, replaces the GOB concept of baseline H.263. Each slice in a picture consists of a group of MBs, and these MBs can be arranged either in scanning order or in a rectangular shape. This mode provides error resilience in several ways. First, the slices are independently decodable without using information from other slices (except for the information in the picture header), which helps to limit the region affected by errors and to reduce error propagation. Second, the slice header itself serves as a resynchronization marker, which further reduces the loss probability for each MB. Third, the slice sizes are extremely flexible, and slices can be transmitted and received in any order relative to one another, which helps to reduce the latency in lossy environments.

Independent segment decoding: This mode is defined in H.263+ Annex R. It enforces the picture segment boundaries (a segment being defined as a slice, a GOB, or an integer number of GOBs) by not allowing dependencies across the segment boundaries. This mode limits error propagation between well-defined spatial parts of a picture and thus enhances the error-resilience capability.

Reference picture selection (RPS): The RPS mode, defined in H.263+ Annex N, allows the encoder to select an earlier picture, rather than the immediately previous picture, as the reference for encoding the current picture. The RPS mode can also be applied to individual segments rather than to whole pictures. The VRC technique discussed in the previous section is one method that can be realized with this mode. When a feedback channel is available, the error-resilience capability can be greatly enhanced: for example, if the sender is informed by the receiver through a NACK that a frame was lost or corrupted during transmission, the encoder may choose not to use that picture for future prediction and may instead select an uncorrupted picture as the reference.

Flexible macroblock ordering (FMO): In the H.264/AVC standard, each slice group is a set of MBs defined by a macroblock-to-slice-group map, which specifies the slice group to which each macroblock belongs. With FMO the MBs of one slice group can follow any scanning pattern; a slice group can consist of one or more foreground and background slices, and the mapping can even be a checkerboard-type (interleaved) mapping. FMO is therefore a very flexible tool for grouping MBs from different locations of the picture into a single slice. Note that, besides the features described above, the forward error-correction mode (Annex H) is also intended to support error resilience [122].
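The macroblock-to-slice-group map used by FMO can be written down directly. The sketch below builds a checkerboard-type map for a small, hypothetical picture size (the dimensions are illustrative and not tied to any particular H.264/AVC level) and lists the MBs of each slice group.

```python
def checkerboard_map(mb_cols, mb_rows):
    """Map each macroblock address to slice group 0 or 1 in a checkerboard pattern."""
    mapping = {}
    for r in range(mb_rows):
        for c in range(mb_cols):
            addr = r * mb_cols + c          # raster-scan macroblock address
            mapping[addr] = (r + c) % 2     # alternate slice groups like a checkerboard
    return mapping

fmo = checkerboard_map(mb_cols=4, mb_rows=3)
group0 = [addr for addr, g in fmo.items() if g == 0]
group1 = [addr for addr, g in fmo.items() if g == 1]
print("slice group 0:", group0)
print("slice group 1:", group1)
# If the packet carrying one slice group is lost, every lost MB is still surrounded by
# received neighbors from the other group, which makes spatial error concealment easier.
```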
5.3 Optimal Mode Selection

As described above, each video standard defines a large number of modes to provide error resilience. Different modes are designed to work under different conditions, such as different applications with different bit-rate requirements, or different infrastructures with different channel error rates and error types. A question that naturally arises from the above discussion is how to choose these modes optimally in practice. In this section we limit the discussion of optimal mode selection to the prediction mode (Inter, Intra, or Skip) and the quantization step size for each MB or packet, without expanding it to the other tools. The Skip mode can be regarded as a special Inter mode in which no residual error and no motion information are coded. Mode-selection algorithms have traditionally focused on rate-distortion (RD) optimized video coding for error-free environments and on single-frame BMC (SF-BMC) coding. A more recent trend is mode selection based on multiple-frame BMC (MF-BMC), in which the reference is chosen from a group of previous frames rather than from a single one as in SF-BMC. MF-BMC methods exploit the correlation among multiple frames to improve the compression efficiency and to increase the error resilience, at the price of increased computation and larger buffers at both the encoder and the decoder. Our aim is to achieve the best video delivery quality with a given bit budget. Mathematically, this can be stated as selecting the modes that minimize the expected distortion D for the given bit budget R, where D is calculated taking the channel errors into account. It is instructive to observe that, for a given design, the source distortion relates directly to the coding efficiency, while the channel distortion relates closely to the error resilience. Generally speaking, fewer bits are needed to encode the DFD than the corresponding image region itself, since the DFD has lower energy and entropy. For this reason inter coding has higher compression efficiency and therefore results in lower source-coding distortion than intra coding for the same bit budget. Regarding the role of the quantizer in the mode-selection problem: roughly speaking, the smaller the quantization step size, the smaller the source distortion but the larger the channel distortion it may cause (for the same level of channel protection).
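A minimal sketch of this error-aware mode selection is given below. For each MB it chooses the mode that minimizes a Lagrangian cost J = E[D] + lambda*R, where the expected distortion mixes the error-free distortion and a concealment distortion according to the packet-loss probability. All numbers (candidate distortions, rates, lambda, and loss probabilities) are invented for illustration and are not taken from the monograph's actual distortion model.

```python
# Per-MB mode candidates: source distortion if received, rate in bits, distortion if lost.
# Intra costs more bits but conceals better on loss because it breaks the prediction chain.
candidates = {
    "intra": {"d_src": 12.0, "rate": 400, "d_loss": 40.0},
    "inter": {"d_src": 8.0, "rate": 150, "d_loss": 90.0},
    "skip": {"d_src": 20.0, "rate": 5, "d_loss": 95.0},
}

def expected_cost(c, p_loss, lam):
    """Lagrangian cost J = E[D] + lam * R, with E[D] mixing the received and lost outcomes."""
    expected_d = (1.0 - p_loss) * c["d_src"] + p_loss * c["d_loss"]
    return expected_d + lam * c["rate"]

def select_mode(p_loss, lam=0.02):
    return min(candidates, key=lambda m: expected_cost(candidates[m], p_loss, lam))

for p in (0.0, 0.05, 0.2):
    print(f"packet loss probability {p:4.2f}: best mode = {select_mode(p)}")
# As the loss probability grows, the optimizer shifts from the rate-efficient Inter mode toward
# the more resilient Intra mode, which is exactly the tradeoff described in Section 5.3.
```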
6. Channel Modeling and Channel Coding

As discussed earlier, the nature of JSCC is to add redundancy optimally both at the source-coding level, which is known as error-resilient source coding, and at the channel-coding level, which is known as channel coding. Having discussed the former, we now study the latter, starting with channel models and channel-coding techniques and focusing on the models and methods used for video transmission applications.

6.1 Channel Models

The most important characteristic of the channels considered here is that they are time varying. Developing mathematical models that accurately capture the properties of a transmission channel is an extremely demanding but extremely important subject. Its importance stems from the fact that, for improved video delivery performance, the end system has to adapt to the changing channel conditions, so the performance of JSCC in general relies heavily on the accuracy of the channel state information. For video applications, the QoS at the application layer is usually measured objectively by the end-to-end distortion, which, as discussed earlier, is calculated from the probability of source packet loss and from the delay. Therefore, the two fundamental properties of the communication channel, as seen at the application layer, are the probability of packet loss and the delay allowed for each packet to reach its destination. For video transmission over a network, the channel can be modeled at different layers of the protocol stack. However, the QoS parameters at the lower layers may not always reflect the QoS requirements of the application layer directly. For wired networks, channel errors typically appear as packet loss and packet truncation, and bit errors are not a problem, since packets with errors are discarded at the link layer and are consequently not forwarded to the network layer. For wired channels such as the Internet, the channel is therefore modeled at the network layer (i.e., the IP layer). For wireless channels, however, the most common type of error, besides packet loss and packet truncation, is the bit error. Hence, for wireless networks, mechanisms that map the QoS parameters of the lower layers to those of the application layer are particularly needed in order to coordinate the effective adaptation of the QoS parameters at the video application layer.

6.1.1 Internet

Packet loss and truncation are the typical forms of channel error in the Internet. In addition, queuing delays in the network can be an important delay component. As a result, the Internet can be modeled as an independent, time-invariant packet-erasure channel with random delays. In real-time video applications a packet is typically considered lost if it does not arrive at the decoder before its intended playback time. The packet loss probability therefore has two components: the probability of packet loss in the network and the probability that the packet experiences excessive delay. The overall loss probability for packet k, combining these two factors, is

ρk = εk + (1 - εk) · νk,

where εk is the probability of packet loss in the network and νk is the probability of packet loss due to excessive delay. We have νk = Pr{ΔTn(k) > τ}, where ΔTn(k) is the network delay of packet k and τ is the maximum acceptable network delay for this packet. This is shown in Fig. 5.1, where the probability density function (pdf) of the network delay is plotted taking packet loss into account. The packet losses in the network, εk, can be modeled in a number of ways, for example as a Bernoulli process, a two-state Markov chain, or a kth-order Markov chain. Figure 5.2 shows an example of a two-state Markov model with channel states h0 and h1. The channel state transition matrix is defined as

A = [ 1-p  p ; q  1-q ],

where p and q are the transition probabilities out of states h0 and h1, respectively; the corresponding steady-state probabilities are q/(p+q) for h0 and p/(p+q) for h1.

Figure 5.1: Probability density function of the network delay, taking packet loss into account. Figure 5.2: A two-state Markov model.

The network delay may also vary randomly, and it tends to follow a self-similar law in which the underlying distributions are heavy tailed, rather than a Poisson law. The shifted Gamma distribution is one of the comparatively simple models used to characterize the packet delay in the network.
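A small numerical sketch of this loss model follows. It combines the network-loss and delay-violation components into the overall ρk, using a shifted Gamma delay distribution, and simulates the two-state Markov loss process with the transition matrix A. All parameter values are illustrative assumptions, not measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Overall loss probability: rho_k = eps_k + (1 - eps_k) * nu_k
eps_k = 0.02                                    # probability of packet loss inside the network
# nu_k = Pr{delay > tau}; model the delay with a shifted Gamma distribution (illustrative values)
shift, shape, scale, tau = 0.020, 2.0, 0.015, 0.100     # seconds
delays = shift + rng.gamma(shape, scale, size=200_000)
nu_k = float(np.mean(delays > tau))
rho_k = eps_k + (1.0 - eps_k) * nu_k
print(f"nu_k = {nu_k:.4f}, rho_k = {rho_k:.4f}")

# Two-state Markov loss process with states h0 (no loss) and h1 (loss) and transition matrix A.
p, q = 0.05, 0.40                               # transition probabilities out of h0 and h1
A = np.array([[1 - p, p],
              [q, 1 - q]])
state, losses = 0, []
for _ in range(50_000):
    losses.append(state == 1)                   # count a loss whenever the chain is in state h1
    state = rng.choice(2, p=A[state])
print("simulated loss rate:", round(float(np.mean(losses)), 4),
      "| steady state p/(p+q):", round(p / (p + q), 4))
```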
6.1.2 Wireless Channel

Compared with their wire-line counterparts, wireless channels exhibit higher bit error rates, typically have a lower bandwidth, and experience multipath fading. In this subsection we do not address the details of how to model wireless channels at the physical layer. Instead, we focus on how the physical-layer channel state information can be translated into QoS parameters, such as delay and packet loss, at the link layer. Given the link-layer packet loss probability, the application-layer packet loss probability (the QoS parameter needed to calculate the video distortion) can then be characterized, depending on the packetization schemes used at the transport and link layers. For IP-based wireless networks, as for the Internet, the wireless channel at the IP level can be treated as a packet-erasure channel as seen by the application. In this setting, the probability of packet loss can be modeled as a function of the transmission power used in sending each packet and of the channel state information (CSI). In particular, for a fixed transmission rate, increasing the transmission power increases the received SNR and results in a smaller probability of packet loss. This relationship can be modeled analytically or determined empirically. An example of the former is an analytical model based on the notion of outage capacity: a packet is lost whenever the fading realization results in the channel having a capacity smaller than the transmission rate, i.e.,

ρk = Pr( C(Hk, Pk) ≤ R ),

where C is the Shannon capacity, Hk is the random variable representing the channel fading, Pk is the transmission power used for packet k, and R is the transmission rate (in source bits per second). The discussion above assumes independent bit errors, which result from perfect interleaving. Interleaving, however, introduces both delay and complexity, and perfect interleaving is not achievable in a practical system, especially for real-time applications. In addition, the radio-channel BER given in (5.4) and (5.5) is normally the average BER, which is a long-term parameter. Markov models are widely used to describe the bursty nature of channel errors. Consider, for instance, the classical two-state Gilbert-Elliott model. In this model the channel is represented by one good state and one bad state, each associated with a different BER. Figure 5.2 illustrates this, with the good state represented by h0 and the bad state by h1; the average burst length is 1/q. A finite-state Markov channel (FSMC) model is a more accurate way of characterizing a fading channel. For instance, an FSMC has been used to model a Rayleigh flat-fading channel for packet transmission systems, with each state characterized by a different bit error rate or receiver SNR. The average duration of each state is approximately constant and depends on the speed of the channel fading; this constant determines the number of states needed to ensure that each received packet lies entirely within one state and that the following packet is either in the same state or in one of the two neighboring states.
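For the outage-capacity model above, a closed form is available under a standard assumption: with Rayleigh fading the channel power gain |Hk|^2 is exponentially distributed, so Pr{log2(1 + snr*|Hk|^2) <= R} = 1 - exp(-(2^R - 1)/snr), where snr = Pk/N0 is the average received SNR. The sketch below checks this closed form against a Monte-Carlo estimate; the SNR, rate, and sample count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def outage_closed_form(snr, rate):
    """Pr{ log2(1 + snr * g) <= rate } for an exponentially distributed power gain g (mean 1)."""
    return 1.0 - np.exp(-(2.0 ** rate - 1.0) / snr)

def outage_monte_carlo(snr, rate, n=1_000_000):
    g = rng.exponential(1.0, size=n)            # Rayleigh fading -> exponential power gain
    capacity = np.log2(1.0 + snr * g)           # instantaneous Shannon capacity (bits/s/Hz)
    return float(np.mean(capacity <= rate))

snr = 10.0      # average received SNR Pk/N0 (linear), i.e. 10 dB
rate = 2.0      # transmission rate R in bits/s/Hz
print("closed form :", round(float(outage_closed_form(snr, rate)), 4))
print("monte carlo :", round(outage_monte_carlo(snr, rate), 4))
# Raising the transmission power Pk raises the effective SNR and therefore lowers rho_k,
# which is the power/loss relationship exploited by the JSCC formulations in this chapter.
```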
The discussion above shows how the probability of packet loss at the link layer can be obtained from the physical-layer channel model. For the transmission of real-time video over wireless networks, the queuing delay is the other QoS parameter that should be considered along with the packet loss. Physical-layer channel models do not explicitly describe how link-layer QoS parameters such as delay and bit rate are derived from the physical-layer channel parameters; deriving them requires an analysis of the connection's queuing behavior, which is difficult to do. To achieve this, it is necessary to have a link-layer channel model that directly characterizes the link-layer QoS parameters, particularly the queuing-delay behavior. In the effective capacity (EC) model, a wireless link is modeled by two EC functions: the probability of a nonempty buffer, γ = Pr{D(t) > 0}, and the connection's QoS exponent, θ. Here γ reflects the marginal cumulative distribution function (CDF) of the underlying wireless channel, while θ corresponds to the Doppler spectrum of the underlying physical-layer channel. The pair of functions {γ, θ} thus characterizes the link-layer channel model, and both functions can be estimated from the physical-layer channel model using a simple and efficient algorithm.

7. System Design

8. Screen Shots

9. Conclusion

According to Shannon's separation theorem, source coding and channel coding can be designed separately and still achieve overall optimality. The aim of source coding is to remove the redundancy from the source and to approach its entropy, whereas the aim of channel coding is to achieve error-free transmission by adding redundancy. Error-free transmission of the source can be achieved as long as the source rate is smaller than the channel capacity; otherwise, rate-distortion theory gives the theoretical bounds on the lowest achievable distortion. For practical systems such as video communications, however, this result hinges on ideal channel coding, which is not realistic.

Wednesday, May 6, 2020

How to Plan for a Listening and Speaking Lesson

How to Plan for a Listening Skill Lesson

Teacher: | Observer: | Date: | Lesson number: | Class level: Elementary | Number of students:
Timetable fit: Previous lesson: reading and speaking skills. This lesson: listening and speaking skills. Next lesson: listening and speaking skills.

Aims (for the teacher):
* To provide an engaging lesson for the students and improve their listening skills.
* To monitor closely and make sure the lesson is successful.

Objectives (for the students): By the end of the lesson the students will:
1) Have practised listening for the gist of a radio programme.
2) Have learnt vocabulary related to professions/jobs.

Language analysis (form, meaning, pronunciation) - lexis in the text: Guess (v, present simple): suppose. Quiz (n, sing): a test of knowledge. Team (n, sing): a group of players. Unemployed (adj). Writer (n, sing). Guest (n, sing): a person who is invited to take part in a function by another person. Depends (v, present simple): rely. Uniform (n, sing). Special qualifications (special = adjective, qualifications = noun, pl). A lot of (phrase): many. Actor (n, sing): role player in a drama or film. Professional (adj): a person who has a profession. Footballer (n, sing): a person who plays football. Do you work...? Where? When? How? Topic vocabulary: jobs/professions.

Assumed knowledge: The students know about different professions such as doctor, footballer and artist. They also know the difference between a profession and a hobby.

Anticipated problems: 1) This is a radio programme, so some students may not understand it and may want to hear it again. 2) Weaker students may not understand the phrases and some of the vocabulary. 3) This can prevent them from completing the comprehension tasks.

Solutions: 1. Check/elicit before the students listen. 2. Include a review of the vocabulary at the beginning.

Materials: Radio programme "Guess the Job"; flash cards/pictures of various people doing different jobs; other handouts (comprehension questions) - teacher's own.

Timing | Interaction | Procedure | Rationale:
7 mins, 4 mins, 3 mins, 4 mins | T-S, S-S, T-S, T-S (pairs/triplets) |
1. Context set: T elicits vocabulary related to jobs using pictures; drill if required. a) Where do people work? b) How do they work? When do they work? What kind of information do you need to find out what somebody's job is? Feedback: pre-teach guess, quiz, a lot of, unemployed, special qualifications.
2. Prediction task: Students look at the picture and a) say what they can see and b) say what is going on in the programme. Feedback: teacher clarifies/elicits and writes a brief summary on the board.
| Rationale: To motivate the students so that they take part in the lesson, and to prepare them for what is coming up.

Lesson content - Timing | Interaction | Procedure | Rationale:
3 mins, 3 mins, 3 mins, 3 mins | S, S-S, S-S, T-S, S |
3. First listening (gist): Students listen to the programme and say whether they recognised the vocabulary. Did they find the programme similar to what they had guessed before, as written on the board? Task: students write yes/no answers on the handouts and check with a partner. Feedback: check as a class.
| Rationale: To check that the students have understood the programme.

Thursday, April 30, 2020

Pepcid AC Case Study

Developed and commercialized by Merck, Pepcid is a prescription drug for the treatment of heartburn. Unlike regular antacids, which just neutralize acids in the stomach, Pepcid belongs to a class of drugs known as H2 receptor antagonists, which reduce stomach acid secretion by blocking the histamine H2 receptor on the cells producing gastric acid. Third in the H2 class to enter the prescription drug market, Pepcid was never able to reverse course and take market leadership from competing products such as Zantac from Glaxo and Tagamet from SmithKline. In recent years, the possibility of developing a lower-dosage form of Pepcid for the OTC market became an attractive business proposition. Merck was not alone in this venture: all major competitors in the H2 receptor antagonist market, including Glaxo and SmithKline, entered a race to get FDA approval for lower-dosage versions of their original prescription drugs. In order to gain regulatory approval, drug makers must prove the safety and efficacy of the medication. Furthermore, the willingness of consumers to comply with the directions specified on the product label is also an important consideration. Nine of the top ten OTC brands introduced since 1975 were formerly prescription-only drugs; a famous example of a successful prescription-to-OTC switch was the painkiller Advil. Not experienced in bringing prescription drugs into the OTC market, Merck joined forces with Johnson & Johnson, which had extensive experience in the consumer products market. The result was the creation of a mutually beneficial alliance known as JJM. Being the first entrant into a new OTC market would present JJM with a unique opportunity to potentially become the leader in the H2 receptor antagonist OTC market. The challenge for JJM was obtaining FDA approval for Pepcid AC, the OTC version of Pepcid, ahead of its competitors. Tagamet was running head-to-head with Pepcid, trailed by Axid and Zantac, which appeared not to have the same level of conviction about becoming first to market. In preparation for the filing with the FDA, JJM conducted clinical studies to support claims of both prevention and treatment of heartburn. Conversely, SmithKline adopted a different strategy, claiming only treatment efficacy. Shortly after recommending against approval of Tagamet's OTC drug, the FDA advisory committee also recommended against Pepcid's approval and stated that JJM "failed to show Pepcid, in its low dosage form, either prevented or provided relief from heartburn."

2- Problem Identification

According to data provided by IMS America (Table A), in 1993 Pepcid ranked sixteenth among prescription drugs in the U.S. and third among H2 receptor antagonists. The market leader for prescription H2 receptor antagonists was Zantac, followed by Tagamet at a distant second place. Considering JJM's relatively low share of the market, and the fact that it had never been able to challenge Zantac's leadership, becoming the first in the H2 class to enter the OTC market was critical for JJM's ambition of becoming the leader in the highly lucrative heartburn treatment market. Another variable in this equation was the dual-claim indication that JJM was planning to file. Being able to increase Pepcid AC's customer perceived value by claiming both prevention and treatment would allow JJM to position the product within a higher price range.
Considering that JJM already has a presence in the antacid OTC market with Mylanta, having Pepcid AC in a higher price range would be a way to clearly distance the two drugs from each other and reduce the risk of cannibalization. Additionally, SmithKline's Tagamet is claiming treatment only, which would represent a competitive point of differentiation for Pepcid AC. Furthermore, a higher price for Pepcid AC would also represent more revenue for JJM. The final factor to be considered in this study is the FDA's record against prevention claims for OTC drugs. Traditionally, the FDA prefers education over medication for purposes of prevention, citing the risk of overmedication that OTC drugs could pose if used for prevention of a disease. Three courses of action are possible for JJM at this point: (a) continue working with the FDA to make the case for the prevention and treatment claims with no delays; (b) drop the prevention claim and go with the treatment-only claim, increasing the chances of approval; (c) conduct more clinical trials to support the prevention and treatment claims, but thereby delay the process and risk falling behind in the race to approval.

3- Situation Analysis

Strengths: Pepcid AC is safer than Tagamet (which has side effects when used with other drugs); strong brand name; convenient one-tablet dosage; lasts longer than traditional antacids; indicated for both treatment and prevention of heartburn versus the treatment-only claim of Tagamet.
Weaknesses: priced higher than traditional antacids; takes longer than traditional antacids to start acting; higher cost than Tagamet (due to license fees).
Opportunities: move from third place in prescription H2-blockers to first place in OTC; gain market share from traditional antacid competitors; growing market for H2-blockers.
Threats: Tagamet, Zantac, and Axid are also in the Rx-to-OTC switch "race"; it is harder to reverse position after the first year of market entrance; cannibalization of the prescription version of Pepcid; cannibalization of JJM's antacid Mylanta.

Considering that Pepcid AC will enter the OTC market to compete against both H2 receptor antagonists and regular antacids, the SWOT analysis above is broken down into two parts, a societal and a lifestyle view of each competitive set.

Pepcid AC versus other H2's: The two main competitors racing to enter the OTC marketplace are Pepcid and Tagamet. Comparing the strengths and weaknesses of both, Pepcid is safer to use, with fewer restrictions when combined with other drugs, and is planning to claim both treatment and prevention of heartburn, versus the treatment-only claim of Tagamet. On the other hand, Tagamet has a slightly lower total cost, because of the fees that JJM pays on Pepcid, and it has the advantage of having started earlier in the OTC transition process. The greatest opportunity for Pepcid against Tagamet is the potential for gaining market share in the heartburn treatment market. Another aspect to be highlighted is the pricing strategy: if priced too low, Pepcid AC could present a significant cannibalization risk to Pepcid.

Pepcid AC versus traditional antacids: Like its competitors in the H2 receptor antagonist market, JJM has a strong presence in the traditional antacid market; JJM's Mylanta is second only to SmithKline's Tums. The main strength of Pepcid AC versus Mylanta and the other traditional antacids is the indication for both prevention and treatment (depending on the option chosen). Another important point of difference for Pepcid AC is its convenient one-tablet-a-day dosage.
The opportunity for Pepcid AC is to gain market share from the other companies in the regular antacid market. Against this opportunity, the risk is that Pepcid AC may cannibalize JJM's Mylanta more than the other antacid brands. To avoid this, JJM can develop coordinated pricing and marketing strategies for Pepcid AC and Mylanta together.

4- Alternative Courses of Action

The FDA advisory committee concluded that JJM's clinical trials did not show adequate efficacy of Pepcid AC in either preventing or treating heartburn. As a consequence, there is considerable risk of rejection if JJM (a) proceeds with filing for regulatory approval for the prevention and treatment claims with only the available clinical data. Although it is not bound to do so, the FDA typically follows the advice rendered by its advisory committee. An alternative course of action is (b) to conduct additional clinical trials in order to conclusively prove the efficacy of Pepcid AC for both treatment and prevention of heartburn. The caveat is that conducting additional trials could take an additional 6-9 months and jeopardize JJM's goal of being first to market with an over-the-counter H2 antagonist. In the case of H2 antagonists, being first to market is perceived as an important competitive advantage, enabling the first entrant to capture and retain the bulk of the market share; it could also facilitate the establishment of long-term customer loyalty relationships. This strategy merits consideration because it increases the likelihood of regulatory approval while also maintaining the critical prevention claim (along with treatment), which JJM views as the key point of differentiation between Pepcid AC and SmithKline's Tagamet. However, the approach could be viewed as overly conservative, since JJM has already conducted extensive clinical trials to prove the efficacy of Pepcid AC for both the treatment and the prevention claims. The most significant study was dubbed the "provocative meal study," in which participants were given a dose of Pepcid AC or placebo prior to consuming meals certain to induce heartburn. JJM strongly believed that these trials already provided sufficient evidence in support of the prevention claim and that additional trials were unnecessary. Therefore, conducting additional trials is a course of action to be pursued only as a contingency plan, that is, if regulatory approval is not achieved with the currently available clinical data. Alternatively, JJM could move forward with (c) filing for regulatory approval for the treatment claim only. This approach is believed to be the easier path to approval compared with pushing for both the treatment and the prevention claims. Typically, the FDA views education as preferable to medication for prevention, so the agency may have an inherent resistance to approving prevention claims, and the potential for regulatory dismissal of the prevention claim is a valid concern. Seeking approval for the treatment claim only may appear to be the easier path to regulatory approval, but this option is not without downside risk. In pursuing it, JJM would be sacrificing its key point of difference and diminishing the value proposition of the product. Furthermore, the lack of the prevention label could result in a significant reduction of market share and a loss of revenue.
Although results from BASES II market research suggest that the treatment-only claim is the most important one for product positioning, concept tests and focus groups support the notion that prevention and treatment together are more important. Additionally, heavy heartburn-drug users, who account for the greatest potential usage of and loyalty to Pepcid AC, strongly favor the combined treatment and prevention claims. Therefore, pursuing treatment-only approval may not necessarily be the best path forward.

5- Recommendations and Implementation

Despite the recommendation of the FDA advisory committee, JJM should still take the case directly to the FDA and request approval of Pepcid AC for both the treatment and the prevention claims. JJM has already conducted sufficient clinical studies supporting both indications, and the regulatory expertise on the Merck side of the JJM partnership should be able to make a compelling case in the regulatory submission. Moreover, if the agency were to reject the filing, JJM could then opt for the contingency plan discussed in the Alternative Courses of Action section (option (b)). Seeking approval for both treatment and prevention is clearly the best course of action. The prevention claim will be an important point of differentiation that will enable JJM to retain leadership in the OTC market once the other H2 brands receive their own FDA approvals. JJM has performed extensive market research and has clearly segmented the market, targeted its customers, and positioned its product well. Results from behavioral market research reveal that the most frequent antacid users are over the age of 50. Furthermore, descriptive market research concluded that users of both antacids and prescription H2 receptor antagonists, together with heavy antacid users, would account for 62% of Pepcid AC's predicted dollar volume. Therefore, a marketing campaign primarily targeting these users is a natural course of action; heavy users are also likely to be early adopters of the new offering and tend to be opinion leaders, which will set the tone for the customer perceived value of the product. Behavioral market research also revealed that patients who used prescription H2 receptor antagonists learned that, with regular use of the medication, they could prevent the onset of heartburn. This is a very important point for JJM, because it holds only 13% of the prescription market (Table B). A marketing campaign emphasizing not only treatment but also prevention, for a convenient OTC drug, could therefore lure a significant portion of the remaining 87% of the prescription market over to Pepcid AC. Research using focus groups also concluded that prevention and treatment together would be the most attractive form of product positioning.

Table B: Market Share of Prescription Antacids
Firm | Prescription Antacid | 1993 U.S. Sales ($ millions) | Market Share (%)
Glaxo | Zantac | 1,694 | 56
SmithKline | Tagamet | 528 | 17
Merck | Pepcid | 387 | 13
Lilly | Axid | 271 | 9
All others | NA | 150 | 5

Analysis of the traditional antacid market is also critical. Here JJM's Mylanta holds 16% of a $745 million market. Being first to market with a superior product, one that is effective for longer than the competition and that also prevents heartburn, could be a significant catalyst for gaining market share from the traditional antacids.
Results from BASES II tests established that 30% of prescription H2 receptor antagonist users and 28%-34% of antacid users would switch to Pepcid AC, and an aggressive advertising campaign could certainly improve these numbers. In conclusion, considering JJM's relatively small market share in both the OTC and the prescription heartburn markets, the potential gains from the switch of Pepcid, even after discounting possible cannibalization, are significant. For example, 30% of the $3,030 million prescription market equates to $909 million; subtracting $118 million for Pepcid's own 13% market share yields $791 million of potential revenue drawn from the prescription drug market alone. Of course, the price of Pepcid AC will be significantly lower, but even a 75% price reduction equates to nearly $200 million in revenue. Similarly, assuming only 28% of the $745 million OTC antacid market switches to Pepcid AC would equate to nearly $209 million in revenue; subtracting approximately $33 million for Mylanta's market share leaves potential revenues of $176 million. These examples also alleviate any concerns regarding sales cannibalization of prescription Pepcid and OTC Mylanta (Table E): though cannibalization will certainly occur, it will be greatly offset by cannibalization of competing brands. To further support the notion that product positioning should be centered on the treatment and prevention claims, JJM carefully determined its competitive frame of reference. JJM assumed that Tagamet, the other leading product in the race for an OTC H2 receptor antagonist, would position itself primarily on its effectiveness in controlling stomach acid (i.e., a treatment-only claim) and would also leverage its heritage as the original H2 receptor antagonist. Treatment, the most important attribute of Tagamet, would therefore be the point of parity in the positioning of Pepcid AC, since both Pepcid AC and Tagamet can be considered equally effective for the treatment of excess stomach acid. As previously discussed, the other key attribute of Pepcid AC's positioning is the prevention claim. This would be the critical point of difference to cement Pepcid AC as the superior product, able to garner greater customer perceived value. Tagamet's prescription heritage is not deemed as important, since it scored near the bottom of the concept tests. With respect to pricing, a $2.95 price tag would render Pepcid AC competitive with antacids; however, a $3.29 price would be appropriate because of its improved efficacy and the prevention claim. Conjoint analysis evaluating the $2.95, $3.29, and $3.95 price points should be performed prior to launch.

6- Conclusion

In addition to all the considerations in the recommendations section of this report, it is important to emphasize that "the race" to be first in the H2 receptor antagonist market does not end with FDA approval. The next phase, the product launch, is of equal importance. JJM will need to orchestrate a sequence of activities to make sure it is the first to hit the market with Pepcid AC. Ramping up production and shipments to distribution centers and retailers is a massive effort. In JJM's factories, all necessary resources and materials will need to be available for the first batches of Pepcid AC to be produced. Logistics will need to be in line to move the drug from the factories through the supply chain and fill the drugstore shelves ahead of any possible competitor.
Additionally, a national advertising campaign will need to be standing by, ready to go on air immediately after FDA approval. Creating public awareness and getting customers to actually try the new product when it arrives will be key to obtaining leadership in this market. For that, JJM will also need to train and incentivize the pharmacists. Prior to the switch, Pepcid is still a prescription-only drug, and as such, education and incentives go toward doctors' offices and hospitals. With the switch, pharmacists will be the first line of contact with the new customers; they will need to receive all the information necessary to explain to customers the advantages of an OTC H2 receptor antagonist compared with the traditional antacids on the market. Following the initial campaign, JJM will need to adjust the advertising strategy to focus on adoption and retention. According to the studies conducted by JJM, the fight for market share will be concentrated mainly in the first year after FDA approval; marketing campaigns during this period will therefore need to be massive.