Monday, September 30, 2019

Cook Chill

1.0 INTRODUCTION
Cook-chill and cook-freeze food production are methods of producing food that have been employed by many different organizations, depending on the types of food and services that a particular organization offers. These production methods work hand-in-hand with kitchen design. Kitchen design refers to the layout of kitchen equipment and the positioning of the working sections so that food can be produced to meet the needs of customers, thereby achieving the goals of the establishment.

2.0 A KITCHEN
A kitchen is a building, or a room in a building, that is specialized for cooking. Different establishments have their own types of kitchens, with different designs that serve different purposes. Some kitchens are designed specifically for catering to customers in transit, such as fast food restaurants, while others have to cater for a specific group of people using a specific type of service; such a kitchen must have the right number of employees to do the job and enough equipment to save both time and energy.

3.0 KITCHEN PLANS
There are different types of kitchen plans, each serving a specific purpose of operation. If a kitchen is designed for a particular way of production, it also has a specific type of equipment available. Some of the common plans are discussed below.

3.1 Corridor kitchen
In a corridor kitchen, the appliances, cabinets and counter space are arranged along two facing walls. If the room is not too long, this can be an efficient kitchen; however, if both ends of the kitchen have doors, through-traffic may create confusion.

3.2 U-shaped kitchen
This is usually considered the best type of kitchen, with the best work triangle, because of its convenient arrangement and the short walking distances between appliances. It has a defined floor space and accommodates a set number of workers.

3.3 L-shaped kitchen
This type of kitchen creates an easy-to-use work triangle. If the kitchen space is large enough, an eating center can be included, where customers serve themselves.

3.4 Center kitchen
This is the most common type of kitchen in most establishments. The working area is in the center, as the name suggests, but it does not provide much space (Figure 3.4.1).

3.5 Island kitchen
All the necessary equipment is placed back to back in the middle of the working area. This arrangement requires adequate space to allow an easy workflow, with enough room between the equipment for easy cleaning and to avoid creating dark areas that attract insects.

4.0 WORK CENTERS
A work center is an area that focuses on a particular type of work activity, such as preparation or cooking. It includes appliances and work space, and the necessary equipment is stored within easy reach, as depicted in Figure 4.0 (a chef preparing a meal at a work center).

4.1 Refrigerator-freezer center
* The refrigerator and the freezer have space next to them for use when loading or unloading food.
* Storage space is needed for items used to package food for refrigeration.
* Storage space is also needed for items used when serving refrigerated or frozen foods.

4.2 Range center (gas/electric range)
* Cabinet storage for foods used at this center.
* Storage space for pots, pans and cooking tools such as ladles, wooden spoons and pot holders.
4.3 Sink or cleanup center
* Appliances such as dishwashers and food waste disposers are found in this center.
* Adequate space for stacking dishes.

4.4 Mixing center
* Can be located between two other centers.
* Has several electrical outlets.
* Storage space for measuring, mixing and baking equipment and all the necessary ingredients.

5.0 TYPES OF KITCHEN ORGANISATIONS

5.1 Conventional kitchen
* Suitable for small establishments.
* Fixed menus and banquets operating on a rational basis.
* All departments are grouped together in blocks.
* Preparation and finishing are done in the same area.

5.2 Combined preparation and finishing kitchen
* Suitable for medium-sized hotels or establishments.
* Preparation and finishing are done in the same section.
* In principle, preparation and finishing are totally or partially separated, depending on the establishment.

5.3 Separate preparation and finishing kitchen (satellite kitchen)
* Suitable for large establishments.
* Preparation and finishing are done in separate sections: one for mise-en-place and the other for finishing.
* Each section houses all the equipment needed for the preparation of its dishes.
* Usually they have no ranges, frying pans or steam-jacketed pots; instead they have grills, microwaves and bains-marie.

5.4 Convenience food kitchen
* A system of interest to establishments that have no preparation kitchen but purchase only convenience foods.
* Deals with the finishing of foods only, mostly canned foods.
* Requires refrigerated and dry storage areas.

In selecting one of these types of kitchen, consideration should be given to:
* the number of meals to be prepared at each meal period
* the types of service
* customer prices
* the system for serving meals
* serving times for hot and cold meals

6.0 FACTORS THAT DETERMINE THE DESIGN OF A KITCHEN
1. Service requirement: Management should be well aware of the food service objectives before planning the kitchen; the type of menu, target numbers and so on usually determine the equipment the kitchen will need.
2. Space availability: The design should ensure efficient usage of the available space.
3. Amount of capital expenditure: Planners should have an accurate idea of the funds available, since finances will often determine the overall design and its acceptability.
4. Type of equipment available: Space must be provided for ventilation and for the kitchen's power supply.
5. Use of convenience foods: The design of a fast food kitchen using ready-made foods will differ from that of a kitchen serving an à la carte menu.

7.0 FOOD HYGIENE
A number of factors may affect the quality and wholesomeness of food:
* the premises, equipment and conditions in which it is stored;
* the care taken by food handlers to avoid contamination from other foods (large-scale handling of food by staff who are not trained in, or conscious of, hygiene requirements is a major source of infection, and in such circumstances cross-contamination can easily occur);
* the location of the kitchen;
* the number of people passing through the kitchen;
* contact of cooked food with raw foods, or with utensils and surfaces contaminated by raw foods, which is likely to cause infection;
* poor segregation of cooking sections, which may contaminate high-risk foods such as cooked poultry and meat (pies, soups, stock), milk, creams, custards, shellfish, eggs, cooked rice and dairy products.
8.0 COOK-CHILL FOOD PRODUCTION
Cook-chill, according to John Campbell, David Foskett and Victor Ceserani (2008), is a catering system based on the normal preparation and cooking of food, followed by rapid chilling and storage in controlled low-temperature conditions above freezing point (0-3°C), and subsequent reheating immediately before consumption. The chilled food is regenerated in finishing kitchens, which require low capital investment and minimum staff. Almost any food can be cook-chilled provided that the correct methods are used during preparation.

8.1 THE COOK-CHILL PROCESS
* The food should be cooked sufficiently to ensure destruction of any pathogenic micro-organisms.
* The chilling process must begin as soon as possible after completion of the cooking and portioning processes, within 30 minutes of the food leaving the cooker.
* The food should be chilled to 3°C within a period of 90 minutes. Most pathogenic organisms will not grow below 7°C, while a temperature below 3°C is required to reduce the growth of spoilage organisms and to achieve the required storage life. However, slow growth of spoilage organisms does take place at these temperatures, and for this reason storage life cannot be greater than five days.
* The food should be stored at a temperature of 0-3°C and should be distributed under such controlled conditions that any rise in the temperature of the food during distribution is kept to a minimum.
* For both safety and palatability, the reheating (regeneration) of the food should follow immediately upon its removal from chilled conditions and should raise the temperature to a level of at least 70°C.
* The food should be consumed as soon as possible and not more than two hours after reheating. Food not intended for reheating should be consumed as soon as convenient and within two hours of removal from storage. It is essential that unconsumed reheated food is discarded.

9.0 COOK-FREEZE FOOD PRODUCTION
This type of food production is similar to the cook-chill system; the only difference is the temperature conditions in which the foods are held.

10.0 COOK-CHILL AND COOK-FREEZE FOOD PRODUCTION IN RELATION TO KITCHEN DESIGN
The type of kitchen determines which food production system can be employed.
1. A conventional kitchen that produces fast food can adopt the cook-chill production system. It is easier to reheat food in a microwave than to start preparing it from scratch; beef stew, for example, takes long to prepare, and for a fast food restaurant time matters most.
2. An L-shaped kitchen creates a large working area, which also creates room for cook-chill or cook-freeze equipment, since this equipment is big and requires a larger space, e.g. blast chillers and deep freezers, as in Figure 10.0 (a chef preparing a meal using a blast chiller).
3. A U-shaped kitchen, though considered the best, would not be the best type of kitchen for cook-freeze production. The equipment might need a corner of the room, which may not be possible because the corners are already occupied by other equipment.
4. A corridor kitchen might also not be suitable for cook-chill systems, because the equipment is placed along the sides of the kitchen, which creates plenty of space for an easy workflow but little storage and working area.
11.0 Conclusion
Cook-chill and cook-freeze are food production methods that are commonly used today to produce food in most hospitality establishments worldwide. The cook-chill and cook-freeze areas of the kitchen are integral parts of the kitchen plan and design; therefore, for these areas to exist in a kitchen, they have to be planned for from the start, when the kitchen is built.

BIBLIOGRAPHY
Fellows, P.J. (2000). Food Processing Technology: Principles and Practice, 2nd ed. Cambridge: Woodhead.
Food Standards Agency (2002). McCance and Widdowson's The Composition of Foods, 6th ed. Cambridge: Royal Society of Chemistry.
Kowtaluk, H. & Kopan, A.O. (1990). Food for Today, 4th ed. New York: McGraw-Hill.

Saturday, September 28, 2019

Affirmative Action Debate and Economics

Yuching Lin
ECON 395
The Affirmative Action Debate

Affirmative Action has recently become the center of a major public debate in the United States, which has led to the emergence of numerous studies on its efficiency, costs, and benefits. The Civil Rights Act of 1964 and the Equal Employment Opportunity Commission ended wage and employment discrimination based on gender and race, significantly decreasing the gap between minorities and non-minorities. Minorities made major progress from the 1960s through the early 1970s due to Affirmative Action (Jones, Jr. 1985). However, for the past few decades, the progress that minorities have made in terms of income, employment and education has largely stagnated. California, Michigan, Nebraska, and Washington State have recently banned racial preferences in employment and college admissions, and California's Proposition 209 has disallowed the preferential treatment of minorities, with opponents of Affirmative Action lobbying for more widespread bans on similar policies while supporters argue fiercely against the removal of Affirmative Action policies. As can be seen, Affirmative Action's status in the United States is now very dynamic due to shifting court decisions and policy plans.

Additionally, returns to education have been increasing in recent decades, and as a result income inequality has also increased: the growing demand for highly skilled workers (workers with high levels of post-secondary education) and the stagnancy of American education (with the added fact that high-quality colleges have become even higher quality and even more selective while lower-tier colleges have decreased in quality) have led to ever-increasing wages for the highly skilled. This "Skill-Biased Technological Change" has led to a widening income gap between the rich and the poor. Naturally, this considerably affects black and Hispanic minorities, who are more likely than non-minorities to be part of the working class or below the poverty line, which raises the stakes in the debate on Affirmative Action. Today, both sides of the debate can bolster their arguments with evidence provided by economic and social research on the policies. But there are additional questions to be answered: is Affirmative Action justified on moral grounds? Can we balance economic efficiency with equity? Is Affirmative Action the best policy for addressing racial inequalities? Do policies that increase diversity result in positive externalities such as reduced prejudice and indirect benefits beyond education and career success? One will discover that, after thorough analysis of the research concerning Affirmative Action, it is still difficult to form a definitive conclusion on the results of the policies. Nevertheless, there is much to learn from the research that has been conducted so far, and one can now better steer research in a direction that will uncover the real benefits and shortcomings of Affirmative Action.

To begin with, the debate on the efficiency of Affirmative Action policies is still very much unsettled.
Opponents claim that the policies actually have several negative effects on the minorities they are intended to benefit, arguing that minority students admitted into overly competitive programs are more likely to drop out than mismatched non-minority students due to the increased competition, which would actually worsen the income gap, since black income has been shown to decrease even more than white income after dropping out of college (Loury, 1995). A proposed "stigma hypothesis" suggests that "preferential treatment perpetuates the impression of inferiority" while simultaneously lowering incentives for high academic effort from minorities (Murray, 1994). Yet another hypothesis, the underperformance hypothesis of Steele (1990), suggests that blacks' academic performance suffers when they are aware that normal standards are lowered in order to accommodate them. However, there exists no research with strong, conclusive results that supports these claims; in contrast, the "race difference in graduation rates is no larger at the most selective institutions," and blacks have been shown to benefit from increased admission to selective universities (Holzer and Neumark, 2000). A study by Cortes (2010) on the Texas Top 10% Plan claims that banning Affirmative Action actually increases college dropout rates for minorities and finds that the mismatch hypothesis is inaccurate.

Because quantifying the economic benefits of Affirmative Action is exceedingly complicated, comparing the total costs and benefits of the policy becomes exceptionally difficult for policy-makers. Holzer and Neumark (2000) note that university admissions policies are not necessarily economically efficient to begin with, which further complicates matters, since researchers do not have a definitive point of efficiency to which they can compare the results of Affirmative Action. Administrative costs and externalities must also be factored into the models, yet economists have not even managed to create a viable model for the effects of Affirmative Action policies. For example, minority students in medical school are less likely to reach as high a level of expertise as non-minorities, but they are more likely to treat minority patients, generating a positive externality (Holzer and Neumark, 2000).

Another point made in the debate on Affirmative Action is that diversity may improve the educational quality of a university. Many educators believe that diversity in colleges is inherently beneficial: students can learn from other students whose experiences and backgrounds give them a whole different set of views and capabilities. Several studies have actually correlated increased diversity in student bodies with improvements in issues such as racial prejudice and bias, although results vary depending on study design, extent and type of diversity, and the issue of interest (Bowman, 2010). Even so, other researchers are likely to reject such evidence, since variables such as attitudes, inter-racial relations, and even school quality (as a result of increased diversity) are difficult to define and accurately quantify (Holzer and Neumark, 2000).
Research has often demonstrated very unclear results, usually only weakly suggesting a few conclusions while also sometimes providing conflicting ones; one study on the effects of diverse student bodies found no effect on post-college earnings, an increase in satisfaction with college experiences, and a decrease in community service (Hinrichs, 2011). Interestingly, a recent study of college admissions based on merit, race, and legacy suggests that the removal of race-preferential treatment may actually decrease the ability of the student body, since colleges may be inclined to increase legacy-based admissions due to the current economic climate (Li and Weisman, 2011). However, the authors also propose that colleges might need to eliminate all preferences barring merit in order to produce the most able student body. Overall, it is generally agreed that more accurate methods for measuring school quality and the quality of school inputs are necessary if more conclusive results on education differentials' effects on unobserved skills, attitudes, and racial income inequality are to be found.

While it is without a doubt that Affirmative Action increases admission and employment rates for minorities, there is much speculation as to whether it is the most effective policy. Researchers are not yet confident of how different variables interact to affect income, employment rates, and other indicators of success. As a result, one will often find seemingly conflicting data, as exemplified by Card and Krueger's (1992) finding that "5-20% of the post 1960 black gains were due to improved school quality," while Smith and Welch (1989) claimed that 20-25% of black gains were a result of improved school quantity, which they asserted should be the focus of educational policies. Moreover, it is well known that employers often discriminate based on race, whether purposefully or unconsciously, and that this significantly impacts labor market outcomes. Affirmative Action can only do so much to address employment discrimination; in some models, the problem of negative racial stereotypes can even be exacerbated by the application of such policies (Holzer and Neumark, 2000). In addition, some suggest targeting education inequalities in primary and secondary education, as opposed to implementing race-preferential treatment in post-secondary education, as a more efficient and equitable policy, although efforts such as NCLB have only slightly improved primary and secondary education inequalities. The quality of primary and secondary education in the United States is already in dire need of improvement; high school graduation rates have stagnated, and school quality is falling behind the educational systems of other developed countries. There also exist other complicated variables that need to be addressed in order to close the income gaps between minorities and non-minorities, such as the inheritance of learning abilities and behavior, ghetto culture, and the underclass (Jencks, 1993).

As one might expect, research concerning controversial issues often suffers from researcher bias. Economic research typically entails numerous variables and methods in order to reach conclusions, and more often than not, results are varied and ambiguous, especially in this particular branch concerning Affirmative Action.
It is a simple matter to selectively exclude certain results in order to make it appear as if a research study conclusively supports or rejects Affirmative Action as a beneficial policy, should one wish to do so. Literature searches can quickly turn up very obviously biased articles with weak evidence and unreasonable claims. Furthermore, past research has indicated that the types of models used in studies can have a significant impact on the results, further obscuring the conclusions on the effectiveness of the policies. In fact, research on the policies has so far produced mostly ambiguous conclusions, although in my opinion studies in general lean very slightly toward suggesting that Affirmative Action is beneficial as a whole; research studies that conclusively reject Affirmative Action as a viable policy are few and far between, and it is even difficult to find studies that demonstrate significant negative effects. In order to obtain more accurate data, researchers would ideally be able to create experimental studies with control and treatment groups, but this is very unlikely to occur because such studies would be unethical. Clearly, it has become even more imperative that researchers improve models of the efficiency of Affirmative Action policies in order to obtain more reliable data and demonstrate the effects of the policy with greater confidence.

Of course, these topics only cover the economic justifications for Affirmative Action, which is clearly also a matter of social justice. There is an endless cycle of philosophy-based debates on the policy: should we as a society aim for equity or efficiency? If we are willing to sacrifice some efficiency for equity, how much is optimal? There is also the question of whether society has a responsibility to "[remedy] the present effects of past discrimination," which begins another entire debate about the responsibility of people today for wrongs committed by their ancestors (Jones Jr., 1985). Some also believe that Affirmative Action devalues the achievements of minorities, since credit may be given to race-preferential treatment instead of to the individuals who accomplish those feats; this can lead to further racial prejudice and bias, possibly encouraging the continuation of racial discrimination in employment. There are also suggestions that policies ought to focus on helping the part of the minority population with low socioeconomic status, so as to avoid situations in which wealthy black students may be given preferential treatment over more qualified white students living below the poverty line.

The debate over Affirmative Action for women is just as controversial, for women also suffer from income inequality, yet they still receive the same advantages growing up that their male siblings receive. One can therefore argue that women are not put at a disadvantage early in life and should not require preferential treatment in admissions or employment. In fact, women who benefit from Affirmative Action are much less likely than racial minorities to be lower-qualified and less-skilled, suggesting that implementing a preferential-treatment policy may be inequitable in this case. An even more abstract argument is the claim that men and women will never reach perfect equality since they inherently desire different careers and hold differing aspirations for lifetime achievements. As can be seen, a myriad of issues factor into the debate on Affirmative Action.
The field of research concerning the topic is beset by the typical problems plaguing education and economics research: education quality and educational inputs to schools are difficult to measure, as are the effects on income, employment rates, job characteristics, and so on. The lack of experimental data lends no help. The effects of nationally implemented legislation are difficult to track and quantify because of differences over time and across states (Altonji and Blank, 1999). Compiling data at a national level is also vastly time-consuming and challenging. Though neither side has conclusive evidence that supports its argument, studies that reveal tentative conclusions on the effects of Affirmative Action policies are emerging, and models are slowly approaching real-world utility. As research continues, hopefully the accumulation of data and models will allow researchers to uncover the true effects of Affirmative Action.

References
Altonji, Joseph G. and Rebecca M. Blank. 1999. "Race and Gender in the Labor Market." In Handbook of Labor Economics, edited by Orley Ashenfelter and David E. Card, 3143-3259. San Diego: Elsevier B.V.
Bowman, Nicholas A. 2010. "College Diversity Experiences and Cognitive Development: A Meta-Analysis." Review of Educational Research 80(1):4-33.
Card, David and Alan B. Krueger. 1992. "School Quality and Black-White Relative Earnings: A Direct Assessment." Quarterly Journal of Economics 107:151-200.
Cortes, Kalena E. 2010. "Do Bans on Affirmative Action Hurt Minority Students? Evidence from the Texas Top 10% Plan." Economics of Education Review 29(6):1110-1124.
Li, Dong and Dennis L. Weisman. 2011. "Why Preferences in College Admissions May Yield a More-Able Student Body." Economics of Education Review 30(4):724-728.
Hinrichs, Peter. 2011. "The Effects of Attending a Diverse College." Economics of Education Review 30(2):332-341.
Holzer, Harry and David Neumark. 2000. "Assessing Affirmative Action." Journal of Economic Literature 38:483-568.
Jencks, C. 1993. Rethinking Social Policy: Race, Poverty, and the Underclass. New York: HarperPerennial.
Jones Jr., James E. 1985. "The Genesis and Present Status of Affirmative Action in Employment: Economic, Legal, and Political Realities." Iowa Law Review 70:901-923.
Loury, Linda D. and David Garman. 1995. "College Selectivity and Earnings." Journal of Labor Economics 13:289-308.
Murray, Charles. 1994. "Affirmative Racism." In Debating Affirmative Action: Race, Gender, Ethnicity, and the Politics of Inclusion, edited by Nicolaus Mills, 191-208. New York: Delta.
Steele, Shelby. 1990. The Content of Our Character. New York: St. Martin's Press.
Smith, James P. and Finis Welch. 1989. "Black Economic Progress after Myrdal." Journal of Economic Literature 27:519-564.

Friday, September 27, 2019

Legal Assignment for Architecture - Case Study

A covenant is an agreement written under seal, commonly used with reference to sales or leases of land. Covenants are privately negotiated and agreed; they do not necessarily obligate both parties involved, but constitute a promise to perform or give something to the other party. The legal document forms part of the deed of ownership and represents a binding contract between two parties: the covenantor (the person bound to perform the promise or stipulation) and the covenantee (the person in whose favour it is made). In this instance, Robert is the covenantor, the one performing the obligation, and John is the covenantee, for whom it is intended. The subject of the covenant in this instance is the retaining wall on Robert's property. Covenants are subdivided into numerous classes. A restrictive covenant is a covenant which restricts the use of land and is binding not only upon the current owner but also upon future owners of the land. A covenant real runs with the land, descends to the heir and is also transferred to a purchaser. If the original owner of Robert's land covenanted to maintain the wall, then the covenant is a restrictive covenant, and Robert is obliged by it to maintain the retaining wall that already exists. Trying to get a covenant enforced is risky and can be expensive and time-consuming. If the covenant is breached, John should check whether enforcement is possible by going through the courts. He would need to prove that his rights are affected. If Robert fails to maintain the retaining wall on his property and risks damage or detriment to John's property, John must show that the disrepair of the retaining wall amounts to liability on Robert's part.

2. Does John have a right of support from the wall? What is the nature and effect of such a right?
John has a right of lateral support from the wall. Since John's property, or soil, has not been altered and remains in its natural state, he is entitled to the right of lateral support. Lateral support is the right of a landowner to have his land supported by the neighbouring property against any slippage, cave-in, landslide, flood, etc. In the case of the two owners, in addition to separating the properties, the retaining wall serves to retain the earth. Although John's land is at a much lower level than Robert's, he still has the right of lateral support, since the right signifies maintaining the land in its natural position. Since Robert owns the retaining wall, it is his obligation to maintain it so as to prevent soil slippage onto the adjoining property, where the damage or impending damage arises from the state of his own property, in this instance the growing roots of a tree on his land. This impending damage to another's land in its natural state requires Robert to give due regard to John's right of lateral support. In this instance, where John's property is in danger of land slippage from Robert's property due to a caving or damaged retaining wall that Robert has covenanted to maintain, caused by the growing trees on Robert's property, John may seek lateral support.

3. Can the tree be chopped down if it is subject to a tree preservation order? John needs to request from

Theories explaining homelessness - Essay

In this paper the issue of homelessness will be discussed thoroughly by looking at national homelessness factsheets, with an emphasis on sociological theories that help explain the reasons for this social problem. Homelessness is a condition caused by numerous factors and affects people of various demographics in the United States. Among the most affected groups are African-American and White people; single men and children aged between 5 and 18 years are especially seriously affected. There are also groups of people who are victims of domestic violence, such as women who leave their homes to escape the violence. Among the homeless there are also mentally ill individuals and drug addicts who have no accommodation. The main causes of homelessness are poverty and unemployment, which leave people with little opportunity to earn enough to pay for housing. However, there are other causes as well, such as domestic violence, mental illness, drug addiction, lack of affordable housing and little public assistance. According to one study, 10% of homeless people suffer from mental disorders (Drake, Osher & Wallach, 1991). There is a need to take certain steps to deal with this issue, for instance controlling mental illness and addiction, poverty and unemployment, and violence, by providing healthcare, psychological and financial assistance to people in need. The classical sociological theorists Emile Durkheim and Max Weber gave their views on society and its issues decades ago in the form of grand sociological theories. Durkheim offered the concept of anomie, a state of normlessness in a society, which can be applied to the social problem of homelessness that America faces today. According to him, social problems enter a society when it lacks moral unity and its norms and values are unable to create

Thursday, September 26, 2019

MHE510, Occupational Health and Safety, Mod 5 SLP Essay

In addition, nonsmokers who are exposed to second-hand smoke increase their risk of heart disease by 25% and their risk of lung cancer by 20% (Zellers, et al., 2007). Studies have documented the difference in air quality and the side effects of second-hand smoke. Air filtering does not work, so having smokers use a separate room is not helpful, just as allowing smoking in one part of a restaurant with non-smokers on the other side of a wall does not work. All of these claims are serious and can lead to long-term workman's compensation damages. Allowing smoking while trying to protect non-smokers is a very expensive policy. The CDC tells us that second-hand smoke contains 250 toxic chemicals, including 50 that cause cancer. Conventional air cleaning systems do not filter the gases in second-hand smoke, current heating, ventilating and air conditioning systems do not stop the exposure, and negative-pressure smoking rooms do not work either (cdc.gov). The only policy that will work for a workplace is to establish a smoke-free workplace, which disallows any smoking on campus. Some states have begun to require that all public places be smoke free. This is a difficult and often unpopular policy, and there may be some costs attached, but the cost of a suit from a very ill employee with lung cancer caused by second-hand smoke could be financially devastating. For those reasons the recommendation must be a smoke-free policy. The CDC provides full kits, available at no charge, to help companies become smoke free and to set the new policy up in such a way as to be successful. There will also need to be a steering committee to put the process together and provide everything needed. A kick-off date will need to be chosen, and advertising will need to be done ahead of time so that visitors are aware before they arrive on the doorstep. Then administration must support it. In conclusion, second-hand smoke is a killer. It is often more detrimental to the non-smoker

Wednesday, September 25, 2019

Thesis essay on digital media

Youths are more consistent users of digital media than any other group in society. It is considered natural that youths are so deeply integrated into digital media. For instance, almost every young person possesses a cell phone. Most of these youths do not necessarily make calls or send short messages; they only want to feel part of the larger population that has identified itself with the current trends of digital media. On the same note, youths, unlike any other group in society, use phones and other digital media devices in the most diverse ways. In this regard, it is right to assert that digital media is a habit for the youth. The youth have assimilated and embraced digital media quite quickly, and this has made them used to it, especially in the social aspects of life. Family negotiation is a critical and integral factor in the interaction between digital media and the youth; it acts as a tool of either control or autonomy in youth-family affairs. Digital media has made communication and the processing, sending and receiving of information easy. With this, parents and other family members can monitor their youth using digital media devices. On the other hand, the youth count on digital media devices as a source of freedom. Although easier and cheaper means of communication are provided, ascertaining the credibility of the information given may be challenging. The youth take advantage of this to lie about their whereabouts, thereby achieving their own autonomy. Digital media has created a whole new world for the youth. The youth express a crystalline awareness of digital media; in the context of social networks, digital media is not only a prosthesis of their bodies, but of their social competence as well (Aspray 153). When direct communication fails to materialize, digital media finds its way in to fill the gap. Uncomfortable and embarrassing moments for the youth have been

Tuesday, September 24, 2019

Smart Meters Privacy Concerns & Solutions Dissertation

The prime concerns of "Compromise of Consumer Privacy" and "Safety Concern", arising from detailed, itemised statistics of electricity usage, will also be discussed in detail. Ways and means of countering this severe drawback will be discussed and a practical solution will be proposed. A new idea on making Smart Meters more consumer-friendly and robust in terms of protecting consumer privacy will be discussed as well.

Acknowledgements

Table of Contents
Abstract
Acknowledgements
Table of Contents
1. Introduction
2. A detailed technological assessment of the power and sophistication of the Smart Meter device
3. A detailed exploration of the features, functionalities and modes of the Smart Meter
4. A practical look at the Smart Meter from the consumer's standpoint and an evaluation of the benefits of such Smart Meters, e.g. reduction of hassles, detailed consumption statistics, and reduction in the risk of meter tampering, hooking, and illegal manipulation of meters to register incorrect readings
5. A comparison with primitive electricity usage measuring devices such as the analogue disk meters
6. An understanding of the "Privacy" issue for consumers: loss of confidentiality of sensitive information, and unauthorised access to consumer-specific information and its use in malpractice
8. Literature Review
9. Research Methodology
10. Findings and Analysis
11. Discussion
12. Conclusions and Recommendations
13. Personal Reflection
14. Bibliography
15. Appendices

1. Introduction
1.1. Project Rationale
In this project we will research the Smart Meter as a consumer product in detail, analysing its pros and cons, identifying its benefits and also the ethical dilemma surrounding the use of Smart Meters to register electricity consumption in homes in cities and in different countries. We will study the issues surrounding the recent release and distribution of Smart Meters, the threats to privacy and the exposure of sensitive personal data. The Smart Meter will be placed against the backdrop of two conflicting scenarios: one promoting and encouraging its usage due to increased consumer convenience and awareness of electricity usage; the other being the generation of statistical data on electricity consumption that has the power and potential to personally identify individuals and intrude into their daily life patterns and overall lifestyle. The Personal Reflection and the Conclusion will talk about Smart Meters over-emphasising accuracy and threatening consumer privacy and safety by exposing detailed usage statistics to unauthorised access. We will conclude with numerous possibilities for reducing the sensitivity of the information generated by the Smart Meter, thereby retaining its advanced technology while caring for the consumer's privacy and safeguarding consumer interests as well.

1.2. Project Aim and Objectives
1.2.1. Project Aim
The aim of the project is to conduct a thorough research study of the features and functionalities of Smart Meters and to make an informed and careful decision on dealing with this sophisticated gadget. A thorough risk assessment followed by a detailed discussion of the advantages, disadvantages, risks and threats is the aim of the project.
1.2.2. Project Objectives
The objectives of the project are:
2. A detailed technological

Monday, September 23, 2019

Corporate Governance, Corporate Social Responsibility and being ethical are essential ingredients for a business to be successful

These businesses were evaluated across the technology, energy, healthcare, aerospace and defence industries. Sample results showed that European businesses display a stronger commitment to corporate social responsibility than Australian companies. Nevertheless, Australian businesses are more dedicated to business ethics and have structures and governance programs in place. Enron and Sarbanes-Oxley have driven the surge of attention to ethics and governance in American businesses in recent years (Zadek 2008). Despite all the efforts on European CSR from 1999 onwards, an increasing number of Europeans rate Australian businesses as below average in the exercise of their responsibilities towards society. Recent business scandals in Europe and the United States will probably have a negative influence on consumer perceptions of Australian and European businesses in key markets. In addition, an increasing number of Europeans and Americans rate businesses poorly in fulfilling their responsibilities to society, possibly mirroring the political and economic tensions between Australia and the European Union. There is, of course, Parmalat, the Italian dairy concern whose owners defrauded investors, including more than $1.5 billion from Australian investors (Cowan, 2004). Whereas two years earlier Europeans had contended that Enron proved the superiority of European business culture, it is now broadly recognised that fraud can, in fact, occur anywhere. But while the structure of European business is changing, not only with regard to regional integration but also towards the American practice of relying on capital markets for finance, many Europeans are still slow to accept the necessary changes in how business is regulated (Voien 2000). After Parmalat, European governments finally realised that they must move towards enforcement mechanisms that require more open finances, as in American business.

Analysis
Since 1970, companies have addressed business ethics in diverse ways, including the establishment of compliance programs and officers, board-level ethics committees, codes of conduct, the preparation and distribution of values statements, the hiring of corporate social responsibility managers, and training programs of all kinds. As events of recent years in Australia and Europe have shown, these efforts regrettably have not stopped Australian and European businesses from engaging in unethical behaviour that leads to major financial scandals (Bradshaw & Vogal 2001). The result is increased pressure on Australian and European businesses and authorities to provide more organised governance and ethics programs to ensure that enterprises are more accountable to the societies in which they operate (Amber & Wilson 2005). Examples of dubious behaviour extend to some workers and managers

Sunday, September 22, 2019

The United States Presidential Election Essay

The recent victory of Barack Obama in the United States Presidential Election of 2008 is one of the biggest issues among the many other big events that have occurred in America this year. Early on, even during the presidential primaries, there was a question of whether then-Presidential candidate Obama would be influential enough to win the electoral vote. It was undoubted that the Democratic Party enjoyed the support of the popular vote, yet the bigger issue was whether they would be able to gain enough of the Electoral College votes. The events and debates leading up to the election have also brought about a controversial election topic: that of Electoral College reform. This brief discourse shall tackle the issue of Electoral College reform and whether or not a different method for the selection of the United States President should be used. To arrive at a better understanding of the issue, it is important first to discuss the pros and cons of the current system. As such, there will be a discussion of the historical antecedents and the impact of such electoral changes. Finally, this discourse will highlight the reasons why the method used by Maine and Nebraska is more effective.

Electoral College Reform
In 1888, the election of Benjamin Harrison was controversial because, for the first time in American history, the winner of the Electoral College lost the popular vote. This raised a lot of questions because it was thought to symbolize a lack of support for the President by the American public. It would also mean that the mandate of the public was not with the President, and it would make it very difficult to pass reforms and laws. This is because the United States has a democratic government. It must be remembered that a democratic system is often mistakenly characterized as the rule of the majority (Davenport 380). While there is usually a large group of middle-class individuals that comprise this democratic system, it does not necessarily mean that the majority rule. The majority usually elects the representative to office, but the hallmark of any democracy is still the protection of the rights of the minority. As such, the Electoral College system ensures that while the majority may influence the outcome of an election, the people are still able to freely select their representation at all levels, most especially at the level of the presidency.

Most recently, the issue of Electoral College reform once again made the headlines when President George W. Bush narrowly won the electoral vote but lost the popular vote. This victory raised a lot of questions, and even former Senator Hillary Clinton called for a constitutional amendment that would allow the selection of the President to be through the popular vote and not the electoral vote. This measure did not pass, yet it certainly did bring to the consciousness of the public the necessity of reviewing the issue of Electoral College reform.

Pros and Cons
One may well ask why such a system, which has been in place for so long and has been the basis for the election of several Presidents, should be replaced with another election style. In order to arrive at a better understanding of the issue, it is important first to discuss the concerns with the Electoral College system.
This is with the goal of proving that there is a need for Electoral College reform and for the adoption of the congressional district method that is being used in Maine and Nebraska.

The first concern is that, it is argued, the Electoral College system does not accurately reflect the sentiment of the public. As the examples in recent elections have shown, an individual can still be declared President of the United States even without the support of the majority, or the popular vote. Under the Electoral College system, as long as a candidate enjoys the support of the states with the heaviest weight, he is virtually assured of victory. This is because the framers of the Constitution sought equality in representation and wanted to ensure that voters in the sparsely populated states would have more weight compared to those in the more densely populated states. This was done to ensure that those in smaller states would be able to be heard and have representation, and that the majority would not overwhelm the minority.

Another peculiar aspect of the electoral system is that a candidate can win the election by accumulating wins in many relatively small states, even if his opponent secures larger margins in a smaller number of states. The reason for this is that the electoral system also features the "winner takes all" rule, under which the proportion of the electoral vote often bears little resemblance to the popular vote. The winner-takes-all rule also creates problems because the small number of electoral votes introduces rounding discrepancies, which have been described as an error. The presence of this error is also problematic because most studies have shown that it reduces voter turnout in areas and states where there are dominant parties. Seeing that they are not able to swing the vote either way, certain voters do not even attempt to exercise their right to vote, for fear that it will all be for naught because of the numbers involved. Being in the minority party in their state, they are aware that if the state uses the Electoral College system they will be unable to sway the outcome of the election either way. This is the reason why there is a voter apathy problem in most of these states.

Perhaps the main problem with the Electoral College arises from the fact that, in case no candidate gets a majority of the electoral votes, it falls upon the House of Representatives to settle the issue. From this point alone, it is clear that such a method has several ramifications. The first is that the results of the election will not matter in any case, because it will be the House of Representatives that determines the winner. This may also be interpreted as a situation wherein the party that is able to get the most seats in Congress will decidedly determine the outcome of the election. The resulting partisan battle is no longer representative of the will of the public, but rather of the will of a body that does not effectively carry the approval of the majority of the voting public. The second problem is that it often results in horse-trading to determine the next President of the United States. Since the House of Representatives is composed of many congressmen, it boils down to trading votes for concessions when determining the leadership of the country.
This totally disregards the electoral process and can be construed as frustrating the will of the voting public. As such, the entire electoral process boils down to which side is able to gain more support for its candidate and which candidate is able to give more concessions to the parties, similar to the events that transpired in 1824 and 1876. The leadership of the country boils down to tax measures and funding instead of what it is really supposed to be about: the will of the voting public.

This system also creates another problem by limiting the choices of the public. Since it has been determined that the party with the better network generally wins, the electoral system leaves out the alternative parties, such as the liberals. Past elections have shown that only the Republican or the Democratic Party is able to field realistic candidates. While the liberal party has shown more strength, the realistic candidates generally come from the two-party system, which in effect limits the choices of the voting public.

Another of the problems with the Electoral College comes from the necessity of primary elections. This means that the long, drawn-out election process is really just a process that was already more or less decided once the primary votes were cast. As recent elections have shown, once the primary elections are held it already becomes clear where the popular votes are and where the electoral strengths lie. This also frustrates the vote and the will of the electorate, because having the primaries means that in most cases the votes of the last states do not really matter, except in a very close election, which is not often the case. The reason for this is that the results of the election have effectively been decided: most of the candidates have conceded even before all the votes have been cast, due to projections of certain candidates having an insurmountable lead.

The end impact of all of these negative aspects is quite simple: the electoral process is frustrated, and the right of the voters to be heard and to make a difference with their vote is disregarded. When the outcome is determined before the process is over, it sends the signal that the votes of those who have not yet voted are no longer necessary in determining the results. This may, in the long run, lead to voter apathy and a lack of support from certain states. From this point of view, it is not a real electoral process because it does not allow the real sentiments of the public to be reflected. Through the electoral process, the foundations of a democratic system become all the more evident. The right of the people to vote and to choose whom they will elect as President is one of the important foundations of a representative democratic system (Lijphart 139). Without these foundations in place, there would be no way to ensure that the rights of the people are protected. The right to select a representative ensures that everyone has a chance to be heard. In the wise words of Abraham Lincoln, democracy is "government of the people, by the people and for the people."

Saturday, September 21, 2019

Music, Culture and Value of Music in a Digital Future

The Uses of Music, Culture and the Value of 'Free' Music in a Digital Future

"We've lost a whole generation of kids, who grew up downloading free music from the web and cannot fathom paying for it."

Abstract
The past ten years have witnessed an enormous growth of musicology within the music and entertainment industry, with questions concerning musical meaning and the extent to which it is informed by cultural experience and socially derived knowledge. Groundbreaking developments are increasingly encouraging demand for new products and platforms from consumer markets that have grown up downloading music, knowing no better than to find their entertainment through the internet under the illusion that it is free. This dissertation looks at the early forms and purposes of music up to the present day, the factors threatening the music industry and what has affected it over recent years. The increased use of the internet, cheap software and equipment, and other technological art forms has changed the way we sell, listen to and buy new music. I want to investigate what effects this will have on the industry in the future and what it means for artists and the way music is created and valued.
Chapter 1 – (Progression of early forms of music, formats and purposes) For centuries music has been the biggest form of entertainment within households, pubs, clubs and events, and ever since the recording of sound the purposes of music and the means to consume it have grown considerably into the 20th century, forcing formats, technology and the music industry to change with time. This chapter will outline the progression of technology associated with music and its means of use in relation to new entertainment.

When Bartolomeo Cristofori invented the piano, identified as a stringed keyboard instrument with mechanically operated rebounding hammers, his invention became a success, and around 1922 a survey showed that the piano was the most popular instrument, found in over 25% of average households. Along with many other musical instruments dated before and after the piano, instruments were used for enjoyment and entertainment, and at times families and friends would gather together to play and sing songs on special occasions.

The very first phonograph was introduced by Thomas Edison around 1878, when the Edison Speaking Phonograph Company was established. The phonograph was treated in the same way as a piano or organ, as families would again sit around and listen to records or family stories within the home, but Edison realised the opportunities he had created with his invention. It opened up the possibilities of using the phonograph to dictate a letter, dictate books for the blind, make family recordings of voices, build music boxes and toys and clocks that announce the time, and connect with the telephone company to record conversations.

In 1857 the Frenchman Leon Scott de Martinville had been the first to invent a documented phonautograph machine, which was able to record sound waves but only created a visual analogue of them, until around twenty years later Thomas Edison allowed two innovators to re-develop the later phonograph, which became the gramophone. The gramophone used disc-shaped materials to record onto, which produced better recording quality and a longer playback time. The American inventor Emile Berliner then created a process which allowed the sound tracing to be etched side-to-side in a spiral onto a zinc disc; this master would then be electroplated to create a negative, which would then be used to stamp duplicate copies onto vulcanized rubber (and later shellac), a process which would change the means of music forever, now known as the mass reproduction of musical entertainment. The process to record, duplicate and play back music opened endless forms of entertainment, and the industry was set to take the world by storm, selling records and making profits from consumers. The gramophone quickly outsold and overtook the phonograph, and by the end of World War I the disc had become the dominant commercial recording format.

A technological development which has had a major impact on music in this century is sound recording. Over the past seventy years the concert audience has been transformed from musical amateurs into a large number of potential buyers. The birth of sound recording started as a mechanical process, and with the exception of the Telegraphone in 1899 this remained the case until the 1920s, when a group of groundbreaking inventions in the field of electronics revolutionized sound recording and the young recording industry.
Sound transducers such as microphones and loudspeakers were introduced, and various electronic devices were made for the purpose of amplification and modification of early electrical sound signals, resulting in the mixing desk. Inevitably, over time all these components and inventions have had an effect on the way musicians record music, the uses of music and the growing demand from music consumers to attain music. These electronic inventions created the means for growth and development within the music industry, opening a wide range of possibilities for the recording process. Although many inventions and ideas were yet to be discovered, early music and its uses had progressed from a means of confined entertainment within the household to a possible worldwide product, in which Emile Berliner's early duplication process played a large part when it came to the distribution and portability of recorded music. As time passed, increasingly people were able to buy recorded music which could be played on a gramophone wherever it might be. Emile Berliner realized that the market wanted a range of music which could be bought, stored and played at any given point; the money-earning potential would be high, and with the importance of his discoveries he decided to start his very own brand of recorded music, which up until today, despite the changes and the new strains on the industry, has been extremely successful with the famous dog-and-gramophone design of 'His Master's Voice' (HMV). Music was now not only being used for enjoyment or purely for entertainment, but was being recorded, duplicated and distributed to consumers around the world, who were able to replay music over and over and enjoy their collections whenever and, most importantly, wherever they wished.

The next major progression concerning music, and one which would increase the need for high-quality equipment, was the introduction of descriptive music tracks within film. The years 1920-1928 were known as the golden age of silent movies. Early movies were accompanied by music scores containing pieces usually played by an organist, pianist or an orchestra, depending on the class of the theatre. Sound tracks, however, were introduced to cinema audiences around 1926-1927; even though the technology to add sound to film was discovered in 1911, it took another 15 years or so for it to be introduced and implemented in movie productions. The use of music within film during this particular period was predominantly to raise the attraction of early movie productions, which would change forever after the opening of Pandora's Box in 1927 and the increase of technical achievements which led Al Jolson to ad-lib a few spoken words in 'The Jazz Singer'. Recorded music for films thereafter became extremely successful within the movie industry, and over the next few years Warner Bros. took control of this area (now a multi-billion pound industry) by producing ten all-talking films with accompanying sound tracks and scores, leaving the silent movies on the shelf. This production process increasingly underlined the importance of having good-quality sound systems to play back the music and sounds on film. Music will always essentially be a huge form of entertainment in many ways, but now different music was being used for more reasons than originally supposed.
With the ongoing growth of equipment and technology, music became a money-making product after the discovery of sound recording. Music began to be used to complement or help describe a visual performance rather than being an individual form of entertainment; it was now coinciding with other art forms and was boosting the popularity and profits of associated productions. With the discovery of magnetic media, music would be promoted on a mass worldwide scale, allowing the public and potential music buyers to listen to broadcasts over the air. The first radio broadcast which involved music is said to have been in 1906 at Brant Rock, MA, when Fessenden played his violin, sang a song and read a few verses from a bible into his wireless telephone on Christmas Eve. It was classed as a broadcast because it was designed for more than one listener and was pre-announced, rather than a one-to-one conversation. 1920 saw the first licensed radio broadcast, as Frank Conrad's company was asked to go on air on a regular basis to send out music to listeners and would sell radios to pay for the service. Radios were advertised in local newspapers to households, and within a few years there were hundreds of stations entertaining thousands of people who had bought or built their own receivers. No longer did an audience have to sit in their own home and manually operate a gramophone, no longer did they necessarily need to buy records from HMV, and they no longer needed to worry about the playback time of records, as the public could listen to the radio every day and tune in to their favorite stations free of charge.

These growing factors underlined the importance of good-quality equipment in furthering the success and portability of music, which led to new discoveries of early formats and storage devices, from magnetic tape machines and cassette tapes/players to audio CDs. After the rubber and shellac records, which were the primary recording medium at the time, a new means of recording came about in 1934/35 when Joseph Begun of Germany built the first magnetic tape machine, which was used for mobile radio broadcasting, before creating the first consumer tape recorder, which provided the 3M Company with a billion-dollar industry. Magnetic tape machines became very popular storage and recording devices in radio stations and recording studios, as they offer higher-quality recording and longer continuous playback of recorded material; the most beneficial aspect of the invention of tape was its portability. Eventually two-track tape machines were introduced, which extended recording possibilities within the studio, but magnetic tape was not used commercially by consumers until the release of the first compact audio-cassette tape in 1963 by the Philips Company of the Netherlands. With a cheap and easy recording medium such as the cassette tape combined with a cassette tape player, it could be argued that this sparked the ever destructive and ongoing battle over music piracy. Tape recorders/players were sold with built-in radios as standard, and at the touch of a button it was possible to record sounds and music straight from the radio. After Philips had patented the cassette tape in 1965 and decided to license it free of charge all over the world, companies started to design new portable recorders and players to complement the compact size of the cassette tape. One of the popular models of tape players was the Sony Pressman, a monaural tape recorder released in 1977.
The next year, in 1978, Sony founder and chief advisor Masaru Ibuka requested that the general manager of the Tape Recorder Business Division start work on a stereo-based model of the earlier Sony Pressman, which gave birth to the Sony TPS-L2 headphone stereo Walkman in 1979 and would completely change the way consumers listen to music. 'They'll take it everywhere with them, and they won't care about record functions. If we put a playback-only headphone stereo like this on the market, it'll be a hit.' What made the Sony Walkman such a big hit was the portability it offered its consumers. Ever since the invention of the piano and organ, phonograph, gramophone, record players, wireless recorders and receivers, all of these mediums allowed the consumer to listen to music in various ways, but none of them actually enabled the listener to become portable, 'on the move', able to listen to their material literally wherever they wanted. Recording and listening to music from this point onwards almost became a hobby for a generation of people who would listen to the radio to try to catch their favourite song and record it to tape, allowing them to repeatedly replay the material and start a collection of stored music.

Many types of storage format had been introduced by this point, but very few were truly beneficial to the storage and quality of music mediums. After magnetic media such as wire, core memory, drum, card, tape, disk and OM disk came many floppy disk formats, which played a great part in early computing storage. Different versions of optical mediums were introduced, the 'optic data disk' coming before Sony proposed a standard for the compact disc (CD) in 1980, followed by formats such as DVD, HD-DVD, holographic, Blu-ray DVD and developments with OM disks. The introduction of optical mediums saw Sony's standard CD hit the very top of high-quality recording and storage mediums. CD-Rs are a 'write once, read many' (WORM) optical medium, a recordable version of the CD with a high level of compatibility with standard CD readers, unlike CD-RWs, which can be overwritten many times but have a lower compatibility level with CD readers, and the disks are slightly more expensive. CDs became the most popular medium of music and data storage due to their capacity and ease of recording, but there is one flaw in the design: after a life span of around two years it is possible for a CD's data to degrade with time, showing a coloured dye as a result. CDs hold a standard capacity of 700 MB, whereas the introduction of DVDs upped the capacity to 4.7 GB, although DVDs were mostly associated with movie projects, which involve much larger files. CDs are still the highest-quality recording/storage medium on which to attain or store music outside of a computer's hard drive, but with newer, smaller compressed formats such as MP3 on the market, the option of buying a CD compared to a smaller and cheaper alternative looks bleak with time, so we see the CD taking a back seat to let newer recording and storage devices into the scene.

Chapter 2 (A demanding society) In today's society, where consumers are demanding faster, cheaper and easier methods of gaining entertainment, they also demand a new outlook towards the devices, gadgets and components with which to view or listen to their product.
This chapter underlines the changes on which new technology has an effect, the way society and subcultures are shaped by technology, and how technology is forced to develop and become more advanced to meet the needs and perceptions of its consumers. In recent years the compact disc has ended the forty-year reign of the twelve-inch LP, which brought consequences for production, distribution and marketing, and in turn discs and tapes have been threatened by technologies which can deliver high-quality sound via cable direct to potential consumers, eliminating the need for the already established pattern of product marketing and distribution. Although the invention of the phonograph and gramophone were considered important aspects in creating a mass market for music and entertainment, "the record industry has been shaped by the need to cope with its volatile market so its established practices and institutions have been constantly undermined by technological innovations which not only offer new and better ways of doing things but, as we shall see, have generally had the effect of increasing the consumers' choice at the expense of the industry's ability to control its market" (Scott, D. Martin, P. 1995 p.209).

There are many important connections between technology, musical characteristics and social groups, and while it may be argued that the fundamental coordinates of a musical form are not determined by its social base, each social group or subculture corresponds to certain acceptable genres. During the 1970s and 1980s the idea that the characteristics of a musical form could give life or influence to the social reality of a culture became more and more popular, with incorporated sociological categories such as class, ethnicity and, importantly, age. "In 1987 John Shepard extended this type of analysis to gender, arguing that different voice types or timbres in popular music gave expression to different kinds of gender identities" (Clayton, M. Herbert, T. Middleton, R. 2003, p. 7, p. 14). The 1990s saw different factors concerning the cultural study of music, and the analytical concern with particular social categories such as class, ethnicity, age, subculture and counterculture had been replaced with a more embracing and persistent concern with social identity.

With the concept of youth culture, it is assumed that teenagers share similar leisure interests and pursuits and were involved in some kind of revolt against their parents and elders. The arrival of youth culture is said to be linked with the growth and increased incomes of early working-class youths, which allowed greater spending power and the means to express their individual interests and styles, causing large markets to develop more interest in youth culture, most notably resulting in music and fashion. It is particular music styles, genres, clothing styles and labels that predominantly place our identities within a culture or subculture, and technology helps shape and create aspirations in a similar way. "Teenage culture is a contradictory mixture of the authentic and the manufactured: it is an area of self-expression for the young and a lush grazing ground for the commercial providers" (Hall, S. Whannel, P. 1964, p.).

"The compressed file format known as MP3 is at the centre of debate towards file-sharing and digital downloading and is thought to be downgrading towards the level of audible quality in music.
Yet the mp3 is also a cultural artefact, a psychoacoustic technology that literally plays its listeners. Being a container technology for recorded sound, the mp3 proves that the quality of 'portability' is central to the history of auditory representation and shows that digital audio culture works according to logics somewhat dissimilar from digital visual culture" (Jonathan Sterne, 2006. New Media and Society, Vol. 8, No. 5, 825-842 DOI: 10.1177/1461444806067737). Today's young generation are not so aware of the historical factors and important issues which led to the advances, demands and uses of audible-quality music, but more so of the social aspects of consumption, portability and quantity of music. A spokesperson for the Recording Industry Association of New Zealand, Terrance O'Neill-Joyce, argues that: "The problem is not with the actual technology of MP3, which he believes is being effectively used by many music producers, but rather the ineffective means of securing remuneration for artists. It's a case of technology outstripping legislation and a lack of proper commercial framework being established as of yet" (Shuker. 2001 p. 65).

MP3 is a technology for encoding recorded sound so that it takes up less storage space than it would otherwise. The size of an MP3 file makes it practical to transfer high-quality music files over the internet and store them on a computer's hard drive, whereas CD-quality tracks take longer to download and transfer. The MP3 file has become very popular as a way to distribute and access music, even though there has been enormous debate over the economic and cultural implications of this new technology. For the typical music consumer the MP3 file is considered a blessing, as anyone can access a wide range and variety of music, mostly for free, as well as having the option to compile their own albums of single tracks from their favorite artists without having to acquire the whole album itself. For artists and producers the MP3 allows them to distribute their music to a potentially worldwide audience without tackling the political processes and mediation of the music industry. For mainstream artists on major record labels the MP3 raises concerns of profit loss due to illegal downloads, which are free of charge and easy to attain. On the other hand, for strictly internet-distributed music producers and publishers the MP3 opens up many opportunities for smaller, more innovative labels and companies. (Shuker. 2001, Pg 65)

Each new medium of technology, communication or entertainment that is introduced to a mainstream audience creates drastic changes in the way we experience music; this also has implications for how we relate to and consume music. The changes and advances in technological recording equipment open both constraints and opportunities relating to the organisation and production of music, while developments in musical instrumentation allow the emergence of 'new sounds'. Most important of all, each new recording format or device used for transmission inevitably alters the previously established process of music production and consumption; they also raise questions about authorship and the legal status of music as property, and the ongoing battle with piracy and profit loss.
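As a rough, illustrative calculation of the storage saving that made the MP3 practical for internet transfer (the 128 kbit/s bitrate and four-minute track length are assumed figures, not values taken from this dissertation), the comparison can be sketched as follows:

import math  # not strictly needed, but kept for clarity if extended

# Back-of-the-envelope comparison of uncompressed CD audio versus a
# typical MP3 encoding. The figures below are common reference values,
# assumed for the sake of the example.
CD_SAMPLE_RATE_HZ = 44_100      # samples per second, per channel
CD_BIT_DEPTH = 16               # bits per sample
CD_CHANNELS = 2                 # stereo
MP3_BITRATE_BPS = 128_000       # assumed MP3 bitrate (bits per second)
TRACK_LENGTH_S = 4 * 60         # a hypothetical four-minute track

cd_bits_per_second = CD_SAMPLE_RATE_HZ * CD_BIT_DEPTH * CD_CHANNELS
cd_megabytes = cd_bits_per_second * TRACK_LENGTH_S / 8 / 1_000_000
mp3_megabytes = MP3_BITRATE_BPS * TRACK_LENGTH_S / 8 / 1_000_000

print(f"Uncompressed CD audio: {cd_megabytes:.1f} MB")   # roughly 42 MB
print(f"128 kbit/s MP3:        {mp3_megabytes:.1f} MB")  # roughly 3.8 MB
print(f"Size ratio:            {cd_megabytes / mp3_megabytes:.0f}:1")

Under those assumptions a lossy MP3 is roughly a tenth of the size of the same track as uncompressed CD audio, which is the practical difference being debated in the quotations above.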
Napster software was introduced in 1999, designed as a search engine, communication portal and file-sharing software that facilitated the sharing process by granting users access to all other Napster users and the mp3 files they chose to share. Within a few months, transfers of music files using Napster reached millions per day, and at its peak it was estimated that as many as sixty million people were using the site. "Whereas Napster requires users to first log onto a central server to access other users' MP3 files, these newer networks allow direct user-to-user (P2P) connections involving multiple file types. These innovations expand the universe of file sharing activity and make it virtually impossible to track users or the files they choose to share" (Garofalo, 2003, cited in Shuker, 2008, pg. 23).

Digital distribution has continuously threatened the music business and the control of music by the record companies. This method also lowers manufacturing and distribution costs while putting pressure on marketing and other aspects of the process. With the industry failing to stop illegal downloads and P2P (peer-to-peer) distribution of recorded music over the last five years, record labels have finally decided to adapt their business to suit the way their consumers get hold of their music. It is becoming more and more apparent that albums and artists are making very little or no money in the music industry because of the lack of physical CD sales, as the majority of money spent during the traditional production process goes towards aspects such as the production, promotion, duplication and distribution of a product. Within the music business, P2P technologies are largely a positive means for consumers and creative artists, because the costs of production, promotion, marketing and distribution are dramatically lowered. These new technologies and approaches to digital distribution mean old and new artists are able to earn more profit through selling singles and albums through P2P networks, as the production process costs a fraction of the album or single. Because they can charge less, they earn and sell more, which means more artists will benefit financially and the industry's broad range of music will reach a wider market. "It is easy to see that we are living in a time of rapid and radical social change, it is much less easy to come to terms with the fact that such change will, without doubt, affect the nature of those academic disciplines that both reflect our society and help to shape it" (Hawkes. 2003. p.7).

The growing concern within the music industry today is focused heavily on the effects of digital downloads and the fall in physical album and record sales in high street music shops and online stores. The debate continues, as sales in the US as well as the UK have fallen due to a number of factors involving the growth of technology and the way we consume our entertainment. According to recent industry researchers, figures show that the UK music industry suffered a drop of up to 11% in record sales in 2007, but download sales boosted the singles market by nearly 30% last year, as single sales increased from 67m in 2006 to 86.6m in 2007, up 29.3%. Despite best-selling albums from artists like Amy Winehouse and Leona Lewis, only 138.1 million albums were sold in 2007, compared with 154.7 million in 2006. Amy Winehouse's Back to Black was the most popular album of 2007, with 1.85 million copies sold.
Leona Lewis's debut album Spirit came second, even though it was only released in November. Music industry body the British Phonographic Industry (BPI) put the 10.8% fall down to copyright theft and difficult retail conditions. The option of album unbundling is also a problem, as consumers are able to select which tracks they want to download from each album; this means albums are not being sold as whole units, and it says a lot to the artists themselves about what their audience wants. Music industry analyst Michael McGuire of Gartner Research told the Agence France-Presse news agency: "It comes back to consumers being in complete control of their media experience". Mr McGuire said fans were sending artists a message: "While you may have put a lot of thought into the sequence of the album, I only like these three songs". BPI chief executive Geoff Taylor said: "The UK market has shown considerable resilience in recent years while global recorded music markets have declined."

Recording companies have a major influence on the music we listen to and shape what is known as popular music within society. The term 'popular music' defies a precise, straightforward definition; it is usually overlooked, and the understanding of the term is taken for granted. To fully understand the term popular music it is necessary to address the general field of popular culture within cultural studies (see: Studying Popular Music Culture, Tim Wall). In this instance I take the word popular from its historical sense of the 'ordinary people'; these days the meaning of the term has expanded, so that 'all music is popular music', meaning 'music that is popular with someone'. "Young people's musical activities, whatever their cultural background or social position, rest on a substantial and sophisticated body of knowledge about popular music. Most young people have a clear understanding of its different genres, and an ability to hear and place sounds in terms of their histories, influences and sources. Young musicians and audiences have no hesitation about making and justifying judgements of meaning and value" (Willis. 1990: 59 cited in Shuker. p.98).

The music industry is big business, an international multi-billion dollar enterprise historically centred in the United States, with the United Kingdom making a significant artistic contribution to the industry and its developing trends, and with Japanese media technologies playing a major part through the commercial design of gadgets and devices. Recording companies are the most important part of the music industry and fall into two main groups: the 'major' international labels and the smaller 'independent' labels, whose structures and operating processes take on a similar role, blurring the distinctions between the two. These differences I will try to evaluate later on in chapter 3. The major labels are renowned for sourcing young talent and recording, promoting, marketing and distributing his or her music, which has a powerful effect on popular consumers, cultures and subcultures due to the image associated with the particular genre or style of music being marketed, but its future is usually determined by the listeners themselves.
â€Å"For after the commercial power of the record companies has been recognised, after the persuasive sirens of the radio acknowledged, after the recommendations of the music press noted, it is finally those who buy the records, dance to the rhythm and live to the beat who demonstrate, despite the determined conditions of its production, the wider potential of pop† (Chambers, 1985: Introduction cited in Shuker 2001 p.23) Consumers are becoming less influenced by the major record labels with the help from the internet as consumers have more freedom to discover new genres and styles which are delivered in new ways. Record labels will always have a certain level of influence to its popular markets but now its the customer who decides on what they really like and want to listen to without feeling outside of the ‘popular music’ category. â€Å"I think there are many benefits for a musician not being signed to a label. I’ve seen first hand, from my experience at major labels, where they will sign up and coming artists b

Friday, September 20, 2019

Recording technology in music

Recording technology in music There have been dramatic advances in music technology; this has led to the use of technology in music being far greater and more widespread, and it has had a dramatic impact on musicianship. The music recording process is defined as the act of composing, rehearsing the piece and physically making the recording, through to doing the final editing and mastering to perfect the recording. The earliest music recordings, made in the late nineteenth century, were captured on tinfoil-coated cylinders, but since then recording technology has vastly improved and editing techniques have become easier and more advanced, through the transition from recording on wax cylinder to analogue tape, compact disc and digital download. These developments have had many effects, both positive and negative, and there has been a lot of debate about them.

One of the main areas in which advances in recording technology have affected musicianship is that the music the end listener hears is hardly ever what the musician originally played; as recording and editing technology advances this is becoming more apparent, and edited songs are drifting further away from what the musician actually performed. This has created a lot of controversy with listeners, as they are unsure of what the artist is capable of and what parts of the music have been manufactured artificially during the editing stage. Some people argue that these advances are an advantage, as the music they listen to is of a higher quality, has greater musical accuracy and is free of performance errors. This is due to advances in technologies such as pitch correction, which allows sections of a performance that are out of tune to be corrected; it also allows new sections of a song to be written by adjusting the pitch of a single note to give samples of a range of other notes that can then be arranged to construct new melodies for the piece. In a similar fashion, sections of music can now be speeded up, slowed down and moved through time in order to allow an artist's performance to be corrected if he or she falls out of time during the performance. Both of these techniques are products of improved recording technology, and a lot of listeners appreciate that music is of a higher quality and more enjoyable to listen to.

On the other hand, there is a group of people who think these improvements in technology are a disadvantage, because the final edited version of a piece of music that the listener hears is often extremely different from the original recording that the musician actually performed. It is also criticised that anyone can now become a pop star due to the increased use of technology, which means that you don't have to have any musical knowledge or talent, as any errors that are made can be corrected. These opinions are reflected by Neko Case, who says: "I'm not a perfect note hitter either but I'm not going to cover it up with auto tune. Everybody uses it, too. I once asked a studio guy in Toronto, 'How many people don't use auto tune?' and he said, 'You and Nelly Furtado are the only two people who've never used it in here.'" This shows that despite protests like Neko Case's against the use of these editing techniques, they are used in almost every piece of music nevertheless. These technologies have also given way to new genres of computer-based music.
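To make the idea of manipulating pitch and timing concrete, the sketch below shows the crudest possible approach, and assumes nothing about the commercial tools mentioned above: simply resampling a recording (playing it back at a different rate) shifts the pitch, but it also changes the duration, which is exactly why dedicated pitch-correction and time-stretching algorithms treat the two separately.

import numpy as np

# Naive illustration only: resampling a tone changes its pitch but also its
# length. Real pitch correctors and time stretchers decouple the two; this
# sketch just demonstrates the underlying trade-off on a synthetic signal.
SAMPLE_RATE = 44_100                      # samples per second
FREQ_HZ = 440.0                           # concert A
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE  # one second of time points
tone = np.sin(2 * np.pi * FREQ_HZ * t)    # a one-second 440 Hz sine wave

def resample(signal, factor):
    """Play back `factor` times faster by linear interpolation."""
    old_idx = np.arange(len(signal))
    new_idx = np.arange(0, len(signal), factor)
    return np.interp(new_idx, old_idx, signal)

def dominant_freq(signal, rate):
    """Estimate the strongest frequency from the FFT peak."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
    return freqs[np.argmax(spectrum)]

sped_up = resample(tone, 1.5)                # 1.5x playback speed
print(dominant_freq(tone, SAMPLE_RATE))      # about 440 Hz
print(dominant_freq(sped_up, SAMPLE_RATE))   # about 660 Hz: pitch has risen
print(len(sped_up) / SAMPLE_RATE)            # about 0.67 s: length changed too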
Developments in recording technology, with recording moving on to computer-based systems, have also largely affected musicianship. Now that most recording systems are computer-based, a lot of editing techniques are far simpler, and no edit or process is final, as all processes can be undone with the press of a button, whereas in more traditional tape-based recording systems undoing processes required a lot of manual work, or might even be unachievable. An advantage of this development is that music producers are able to experiment with different edits and processes in order to find an outcome that they are satisfied with; if they try any processes that they are not happy with, the piece can easily be reverted to its former state. When traditional tape recording methods were used, editing involved cutting the tape up and then sticking it back together again, which means it required far more skill than using a modern computer system. When using a traditional tape system, undoing edits is far more difficult, so producers may be discouraged from experimenting with edits as it would require more work. On the other hand, the same factors have disadvantages: producers may make edits on a traditional system that do not sound quite right but that are impossible or impractical to rectify, and these small flaws would add character to the piece, which does not happen when using a modern computer system, as any small imperfections can easily be removed without any bother. These facts about how recording and editing require far less skill are explained by David Williams and Peter Webster: "When computers were large and delicate and required trained system operators, and when the first sound devices relied on complicated procedures to connect one element to another, you needed to know a great deal about technical things. This has all changed." This has given way to home recording, which has enabled a lot more musicians to produce their own pieces and become well known, as home recording equipment is readily available.

As recording technology has developed, the sound quality of recordings has improved dramatically. The initial recordings that were made on wax cylinders had a lot of hiss and crackling, a low signal-to-noise ratio; this made the music in the recording very hard to make out and the recording unpleasant to listen to, whereas in modern recordings these noises have almost been eliminated. An example of this improvement in sound quality is shown in this news article regarding new microphone technology: 'new high-performance MEMS microphones enable dramatic advancements in sound quality'. This has made listening to music far more pleasant, which has improved musicianship, as it has made it easier for musicians to listen to others' performances and use them as inspiration for their own pieces. It has also allowed backing tracks to be produced to help musicians learn pieces, which they can use to make their performances sound more realistic when they are playing solo or in small groups. As recording technology has developed and new distribution mediums have become available, music has become more portable, more widely available, and in general of a higher quality. This has improved musicianship, as it has made it easier for musicians to listen to others' performances and use them to motivate and inspire themselves.
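The "no edit is final" behaviour of computer-based systems described above comes down to keeping an edit history rather than overwriting the material, as tape splicing does. Below is a minimal sketch of that idea; the edit operation is an invented placeholder, not a function from any real audio editor.

# Minimal sketch of non-destructive editing: every edit is stored as a new
# state, so any process can be undone. The "boost_gain" edit is a made-up
# example, not part of any real editing software.
class EditHistory:
    def __init__(self, audio):
        self.snapshots = [list(audio)]    # keep every state of the take

    @property
    def current(self):
        return self.snapshots[-1]

    def apply(self, edit):
        """Apply an edit to a copy of the current state, preserving the old one."""
        self.snapshots.append(edit(list(self.current)))

    def undo(self):
        if len(self.snapshots) > 1:       # never discard the original take
            self.snapshots.pop()

def boost_gain(samples):                   # hypothetical edit: double the level
    return [s * 2.0 for s in samples]

take = [0.1, -0.2, 0.3]                    # a toy "recording"
session = EditHistory(take)
session.apply(boost_gain)
print(session.current)                     # [0.2, -0.4, 0.6]
session.undo()
print(session.current)                     # back to [0.1, -0.2, 0.3]

A tape splice, by contrast, physically destroys the previous state, which is the asymmetry the paragraph above is describing.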
In general these developments have improved musicianship, but they have also had some detrimental effects, such as sound quality being lost, particularly when music became digital in 1982 with the release of the compact disc that would replace the vinyl record. Many people argued that the use of digital data to represent audio led to a severe loss of quality in the music, as some of the sound is lost due to compression and digital sampling, which can give digital recordings a dull tone compared to the brighter tone of analogue recordings. One person who says this is Wayne Ellis Lee: 'vinyl has a warmer, fuller sound while CD has a digital, mechanical sound'. On the other hand, a digital compact disc recording can be played an infinite number of times without a loss in quality, whereas with an analogue recording some of the quality is lost, and you get a noticeable hiss if it is played repeatedly.

Modern MP3 technology and internet downloading of music also have both positive and negative effects on musicianship. A negative effect is that, due to internet downloading and peer-to-peer networks, it has become a lot easier for individuals to obtain free copies of an artist's music illegally. This is expanded on by Mark Katz, who says: 'While there is nothing illegal about MP3 and P2P technology per se, it is illegal to download or distribute digital files of copyrighted recordings without the permission of the copyright holder.' These illegal downloads mean that the artist is not getting the royalties for their song that they deserve, and they may be discouraged from producing their own music because it is not financially feasible for them. When music was distributed on a physical medium it was more difficult for listeners to obtain illegal copies of a recording, and consumers were also encouraged to purchase an artist's product because they were obtaining a physical copy of the song, whereas with modern music downloading the listener gets a virtual file containing the performance. The fact that the music is not in a physical form is also an advantage, as musicians and listeners are now able to have much larger music collections, and music retailers can offer a wider selection because they are not limited by the physical space needed to store the music.

In conclusion, there have been many advances in recording technology that have affected musicianship. Most of these developments have made it easier for musicians to record, market and improve their performances, but they have also produced many disadvantages for both the musician and the listener.
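As a small, self-contained illustration of the sampling-and-quantisation step being debated above, the sketch below uses CD-standard numbers (44.1 kHz, 16 bits) on a synthetic sine wave; the signal itself is an assumption made purely for the example.

import numpy as np

# Digitising audio means sampling it in time and rounding each sample to a
# fixed number of levels (16 bits on CD). The rounding error computed below
# is the "loss" that the analogue-versus-digital debate refers to; at 16 bits
# it is very small, which is why the audibility of the difference is disputed.
SAMPLE_RATE = 44_100
BIT_DEPTH = 16
LEVELS = 2 ** (BIT_DEPTH - 1)              # 32768 quantisation levels per polarity

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second of time points
analogue = np.sin(2 * np.pi * 440 * t)     # idealised "analogue" waveform

quantised = np.round(analogue * LEVELS) / LEVELS
error = analogue - quantised

signal_rms = np.sqrt(np.mean(analogue ** 2))
error_rms = np.sqrt(np.mean(error ** 2))
snr_db = 20 * np.log10(signal_rms / error_rms)

print(f"Worst-case rounding error: {np.max(np.abs(error)):.2e}")
print(f"Quantisation SNR: {snr_db:.1f} dB")   # roughly 98 dB for 16-bit audio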

Thursday, September 19, 2019

Traveling Women Ministers - Pushing Gender Boundaries

Traveling Women Ministers - Pushing Gender Boundaries Quaker women led lives that were very different from those of their contemporaries. These women had the opportunity to act as vigorous participants in their faith, not being driven from the supposed domain of men. George Fox, considered to be the founder of the Society of Friends, saw the ministry as a holy calling instead of a trade—making it naturally open to all (Trueblood 31). Many women, including Barbara Blaugdone, heeded their call to the Ministry. Some of these women pushed the limits even farther than most, following their call to preach, wherever it led. These traveling ministers pushed even the limits of fellow Friends, often experiencing great oppression and ill regard by those outside of the faith. These women also chose to press other gender boundaries of the time. For this devoted group to be fully understood by the modern reader, they must be seen for what they were, radicals of their time.

The behaviors exhibited by women like Blaugdone could easily have been, and often were, misconstrued. Acts like sleeping in a hog trough, sneaking onto someone's property, and barging into the office of the Commander-in-Chief of the Army, all outlined in Blaugdone's own narrative, were quite out of the ordinary. Often compared with the actions of a vagrant or a prostitute, these dealings were not seen for their religious affiliation but instead for their shocking deviance from the norm. So flustered by great differences between Quakers and other groups of the period, some individuals reacted violently. In one such instance Blaugdone, along with Mary Prince, was attacked by a knife-wielding man, who did, in fact, succeed in sticking Blaugdone in the side (Blaugdone 10). On her mission to Dublin, Blaugdone was blamed for storms affecting their ship and was almost thrown overboard by her shipmates (Blaugdone 21). Katharine Evans and Sarah Chevers, fellow Quaker travelers, were thrown in prison during their mission to Alexandria, and were tortured psychologically by their captors (Davies 262). True, even stationary Quakers felt many assaults, but traveling women received the worst of it.

Traveling female Quakers tested gender norms even more so than by preaching alone. Their ability to ignore the role of men as protectors, as well as owners, had no context in the minds of their contemporaries. Evans and Chevers greatly distressed their captors when they refused to give their affiliation to fathers or husbands.

Wednesday, September 18, 2019

Integration - It's Time for a Change

Integration - It's Time for a Change Integration is definitely not working, or being used the way it was intended. Sure, it's working in the sense that the schools are more diverse than when they were segregated, but integration is nowhere near where it should be and where it was intended to be. There are a few key points that demonstrate why integration isn't working like it should. The most obvious is the lack of integration in a lot of schools, and the lack of diversity in our classes. Like we saw in the documentary, the schools are integrated, but the classes are segregated. Integration in some schools has led to tracking, which essentially is institutionalized racism - the opposite of what integration is for. Although we have programs like METCO, we don't have nearly enough. There isn't enough integration in our schools, and there isn't anything changing that. Programs that did have voluntary integration have been cancelled because of a lack of funding and support from the community and government. I think that districts themselves need to be more involved in tryi...

Tuesday, September 17, 2019

Formulation & Evaluation of Atenolol HCl Microemulsion for Ocular Administration

1. INTRODUCTION Objectives of the project: (a) Develop a formulation of Atenolol HCl microemulsion for ocular application to decrease IOP in cases of glaucoma. (b) Improve the quality of life of patients suffering from glaucoma. (c) Reduce the number of doses per day.

1.1 Eye "If a physician performed a major operation on a seignior (a nobleman) with a bronze lancet and has saved the seignior's life, or he opened the eye socket of a seignior with a bronze lancet and has saved the seignior's eye, he shall receive ten shekels of silver. But, if the physician in so doing has caused the seignior's death or has destroyed the seignior's eye, they shall cut off his hand." The foregoing excerpts are from the 282 laws of King Hammurabi's Code.

The eye is unique in its therapeutic challenges. An efficient system, that of tears and tear drainage, quickly eliminates drug solutions, which makes topical delivery to the eye somewhat different from delivery to most other areas of the body. Preparations for the eye comprise a variety of different types of products; they may be solutions (eye drops or eyewashes), suspensions, or ointments. Any modern text on drug product design and evaluation must place into perspective the unique nature of the ophthalmic dosage form in general, and more specifically it must consider the bodily organ which, probably better than any other, serves as a model structure for the evaluation of drug activity: the eye. In no other organ can the practitioner, without surgical or mechanical interaction, so well observe the activity of the drug being administered. Most ocular structures can be readily viewed, from cornea to retina, and in doing so any signs of ocular or systemic disease can be detected long before sight-threatening or certain health-threatening disease states become intractable. Behind the relatively straightforward composition of ophthalmic solutions and ointments, however, lie many physicochemical parameters which affect drug stability, safety and efficacy, as they do in most other products. Additionally, specialized dosage forms such as parenteral-type ophthalmic solutions for intraocular, sub-Tenon's and retrobulbar use; suspensions for insoluble substances such as hydrocortisone; and solids for reconstitution such as echothiophate iodide and tetracycline all present the drug product designer with composition and manufacturing challenges in the development of pharmaceuticals.

Ophthalmic products, like most others in the medical armamentarium, are undergoing a process termed optimization. New modes of delivering a drug to the eye are being actively explored, ranging from a solid hydrophobic device which is inserted into the ophthalmic cul-de-sac to conventionally applied dosage forms which, due to their formulation characteristics, markedly increase the drug residence time in the orbit of the eye, thus providing drug for absorption for a prolonged period of time and reducing the frequency with which a given drug product must be administered [1]. Ocular diseases are mainly treated topically by application of drug solutions administered as eye drops. These conventional dosage forms account for 90% of the available ophthalmic formulations, which can be attributed to the simplicity and convenience of such dosage forms [2]. It is often assumed that drugs administered topically to the eye are rapidly and totally absorbed and are available at the desired site in the globe of the eye to exert their therapeutic effect. Indeed, this is generally not the case.
When a quantity of a topical ophthalmic dosage form is applied to the eye, generally to the lower cul-de-sac, several factors immediately begin to affect the availability of the drug contained in that quantity of the dosage form. Upon application of 1 to 2 drops of a sterile ophthalmic solution, many factors participate in the removal of the applied drops from the lower cul-de-sac [5].

The first factor affecting drug availability is the loss of drug from the palpebral fissure. This takes place by spillage of the drug from the eye and its removal via the nasolacrimal apparatus. The normal volume of tears in the human eye is estimated to be approximately 7 µl, and if blinking does not occur the human eye can accommodate a volume of 30 µl without spillage from the palpebral fissure. With an estimated drop volume of 50 µl, 70% of the administered volume of two drops can be expected to be expelled from the eye by overflow. If blinking occurs, the residual volume of 10 µl indicates that 90% of the administered volume of two drops will be expelled.

The second factor is the drainage of the administered drop via the nasolacrimal system into the gastrointestinal tract, which begins immediately upon instillation. This takes place when reflex tearing causes the volume of fluid in the palpebral fissure to exceed the normal lacrimal volume of 7-10 µl. Fig. (1) indicates the pathways for this drainage. A third mechanism of drug loss from the lacrimal fluid is systemic absorption through the conjunctiva of the eye. The conjunctiva is a thin, vascularized membrane that lines the inner surface of the eyelids and covers the anterior part of the sclera. Due to the relative leakiness of the membrane, rich blood flow and large surface area, conjunctival uptake of a topically applied drug from the tear fluid is typically an order of magnitude greater than corneal uptake [3]. Figure (1): The pathways for drainage of drug from the eye [2]

In competition with the three foregoing routes of drug removal from the palpebral fissure is the transcorneal absorption of drug. The cornea is an avascular body and, with the precorneal tear film, the first refracting mechanism operant in the physiological process of sight. It is composed of lipophilic epithelium, Bowman's membrane, hydrophilic stroma, Descemet's membrane and lipophilic endothelium. Drugs penetrate across the corneal epithelium via the transcellular or paracellular pathway. Lipophilic drugs prefer the transcellular route. Hydrophilic drugs penetrate primarily through the paracellular pathway, which involves passive or altered diffusion through intercellular spaces. For most topically applied drugs, passive diffusion along their concentration gradient, either transcellularly or paracellularly, is the main permeation mechanism across the cornea [6]. Physicochemical drug properties, such as lipophilicity, solubility, molecular size and shape, and degree of ionization, affect the route and rate of permeation in the cornea [3].

1.2 Microemulsions Oil and water are immiscible. They separate into two phases when mixed, each saturated with traces of the other component [7]. An attempt to combine the two phases requires energy input to establish water-oil contacts that would replace the water-water and oil-oil contacts. The interfacial tension between bulk oil and water can be as high as 30 dynes/cm [8]. To overcome this, surfactants can be used. Surfactants are surface-active molecules; they contain water-loving (hydrophilic) and oil-loving (lipophilic) moieties [9].
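Before continuing, the precorneal loss figures quoted above follow from simple volume bookkeeping. The sketch below reproduces them, assuming a typical drop volume of roughly 50 µl and the cul-de-sac capacities given in the text.

# Rough volume bookkeeping for topically applied eye drops, using the
# capacities quoted above: about 30 µl held without blinking, and only
# about 10 µl retained once blinking occurs. The 50 µl drop volume is an
# assumed typical value.
DROP_VOLUME_UL = 50
DOSE_UL = 2 * DROP_VOLUME_UL              # two instilled drops

CAPACITY_NO_BLINK_UL = 30                 # held without blinking
RESIDUAL_WITH_BLINK_UL = 10               # retained once blinking occurs

lost_no_blink = DOSE_UL - CAPACITY_NO_BLINK_UL
lost_with_blink = DOSE_UL - RESIDUAL_WITH_BLINK_UL

print(f"Expelled without blinking: {100 * lost_no_blink / DOSE_UL:.0f}%")   # 70%
print(f"Expelled with blinking:    {100 * lost_with_blink / DOSE_UL:.0f}%") # 90%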
Because of this amphiphilic character, surfactant molecules tend to adsorb at the water-oil interface. If enough surfactant molecules are present, they align and create an interface between the water and the oil by decreasing the interfacial tension [8]. An emulsion is formed when a small amount of an appropriate surfactant is mechanically agitated with the oil and water. This results in a two-phase dispersion where one phase exists as droplets coated by surfactant that are dispersed throughout the continuous, other phase. These emulsions are milky or turbid in appearance because the droplet sizes range from 0.1 to 1 micron in diameter [9]. As a general rule, the type of surfactant used in the system determines which phase is continuous. If the surfactant is hydrophilic, then oil will be emulsified in droplets throughout a continuous water phase. The opposite is true for more lipophilic surfactants: water will be emulsified in droplets that are dispersed throughout a continuous oil phase [10]. Emulsions are kinetically stable but are ultimately thermodynamically unstable. Over time, they will begin to separate back into their two phases. The droplets will merge together, and the dispersed phase will sediment (or cream) [9]. At this point, they degrade back into bulk phases of pure oil and pure water, with some of the surfactant dissolved preferentially in one of the two [8].

1.2.1 Characteristics of Microemulsions If a surfactant that possesses balanced hydrophilic and lipophilic properties is used in the right concentration, a different oil and water system will be produced. The system is still an emulsion, but it exhibits some characteristics that are different from the milky emulsions discussed previously. These new systems are called "microemulsions". The interfacial tension between phases, the amount of energy required for formation, the droplet sizes and the visual appearance are only a few of the differences seen when comparing emulsions to microemulsions. Microemulsions are in many respects small-scale emulsions. They are fragile systems in the sense that certain surfactants in specific concentrations are needed for microemulsion formation [11]. In simplest form, they are a mixture of oil, water and a surfactant. The surfactant, in this case, generates an ultra-low free energy per unit of interfacial area between the two phases (on the order of 10⁻³ mN/m), which results from a precise balance between the hydrophilic and lipophilic nature of the surfactant and large oil-to-water interfacial areas. These ultra-low free energies allow thermodynamically stable equilibrium phases to exist, which require only gentle mixing to form [12]. This increased surface area ultimately influences the transport properties of a drug [14]. The free energy of the system is minimized by the compensation of surface energy by dispersion entropy. The flexible interfacial film results in droplet sizes that fall in a range of 10-100 nm in diameter for microemulsion systems. Although these systems form spontaneously, the driving forces are small and may take time to reach equilibrium [14].
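The "large oil-to-water interfacial areas" mentioned above are easy to underestimate. The illustrative calculation below, which assumes a 1 ml oil phase and a 25 nm droplet radius chosen from the 10-100 nm range quoted in the text, shows how much interface a nanometre-scale dispersion creates.

import math

# Dispersing a bulk oil phase into nanometre-sized droplets multiplies the
# oil-water interface enormously. The oil volume and droplet radius are
# illustrative assumptions, not values taken from the project itself.
OIL_VOLUME_M3 = 1e-6          # 1 ml of oil
DROPLET_RADIUS_M = 25e-9      # 25 nm radius (50 nm diameter droplets)

droplet_volume = (4 / 3) * math.pi * DROPLET_RADIUS_M ** 3
droplet_area = 4 * math.pi * DROPLET_RADIUS_M ** 2
n_droplets = OIL_VOLUME_M3 / droplet_volume

total_area_m2 = n_droplets * droplet_area   # equivalent to 3 * V / r
print(f"Number of droplets:     {n_droplets:.2e}")
print(f"Total interfacial area: {total_area_m2:.0f} m^2")  # roughly 120 m^2

Spreading that much interface is only thermodynamically affordable because the surfactant film drives the interfacial tension down to the ultra-low values described above.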
Microemulsions are homogeneous on a macroscopic scale but heterogeneous on a molecular scale [7]. They usually exhibit low viscosities and Newtonian flow characteristics: their flow remains constant when subjected to a variety of shear rates. Bicontinuous formulations may show some non-Newtonian flow and plasticity [16]. Microemulsion viscosity is close to that of water, even at high droplet concentrations. The microstructure is constantly changing, making these very dynamic systems with reversible droplet coalescence [15]. To study the different properties of microemulsions, a variety of techniques are usually employed. Light scattering, x-ray diffraction, ultracentrifugation, electrical conductivity and viscosity measurements have been widely used [20]. These are only a few of the many techniques used to characterize microemulsions. Instrumentation and its application to microemulsions will be discussed in a later chapter.

1.2.2 Types of Microemulsions Microemulsions are thermodynamically stable but are only found under carefully defined conditions [3]. One way to characterize these systems is by whether the domains are in droplets or continuous [22]. Characterizing the systems in this way results in three types of microemulsions: oil-in-water (o/w), water-in-oil (w/o) and bicontinuous. Generally, one would assume that whichever phase has the larger volume would be the continuous phase, but this is not always the case. Figure (2): Possible nanostructures present within microemulsions: a) o/w; b) w/o; and c) bicontinuous [22]

Oil-in-water microemulsions are droplets of oil surrounded by a surfactant (and possibly co-surfactant) film, forming the internal phase distributed in water, which is the continuous phase. This type of microemulsion generally has a larger interaction volume than the w/o microemulsions [23]. The monolayer of surfactant forms the interfacial film, which is oriented in a "positive" curve, where the polar head-groups face the continuous water phase and the lipophilic tails face into the oil droplets [17]. The o/w systems are interesting because they enable a hydrophobic drug to be more soluble in an aqueous-based system by solubilizing it in the internal oil droplets. Most drugs tend to favor small- or medium-molecular-volume oils as opposed to hydrocarbon oils, due to the polarity of poorly water-soluble drugs. Drug delivery from an o/w microemulsion tends to be straightforward compared with w/o microemulsions, because the droplet structure of o/w microemulsions is retained on dilution with the biological aqueous phase [23].

Water-in-oil microemulsions are made up of droplets of water surrounded by an oil continuous phase. These are generally known as "reverse micelles", where the polar head-groups of the surfactant face into the droplets of water, with the fatty acid tails facing into the oil phase. This type of droplet is usually seen when the volume fraction of water is low, although the type of surfactant affects this as well. A w/o microemulsion used orally or parenterally may be destabilized by the aqueous biological environment, which increases the phase volume of the internal phase, eventually leading to a "percolation phenomenon" where phase separation or phase inversion occurs [23].
O ral peptide delivery in w/o microemulsions is still used, however, The hydrophilic peptides can be easily incorporated into the water internal phase and are more protected from enzymatic proteolysis by the continuous oil phase than other oral dosage forms [17, 18].A w/o microemulsion is best employed, though, in situations where dilution by the aqueous phase is unlikely, such as intramuscular injection or transdermal delivery [17, 19]. When the amount of water and oil present are similar, a bicontinuousmicroemulsion system may result. In this case, both water and oil exist as a continuous phase. Irregular channels of oil and water are intertwined, resulting in what looks like a â€Å"sponge-phase† [ 20, 21]. Transitions from o/w to w/o microemulsions may pass through this bicontinuous state.Bicontinuousmicroemulsions, as mentioned before, may show non-Newtonian flow and plasticity. These properties make them especially useful for topical delivery of drugs or for intravenous a dministration, where upon dilution with aqueous biological fluids form an o/w microemulsion [25]. 1. 2. 3 Preparation of Microemulsion The preparation of microemulsions requires the determination of the existence range of microemulsions, which can be determined by visual observation of various mixtures of surfactant, co-surfactant, oily phase, and aqueous phase reported in a phase diagram.Two techniques are presented in the literature, each of them resulting in microemulsions: (1)†Exact† process by autoemulsification; (2) process based on supply of energy. 1. 2. 3. 1 Autoemulsification: Due to the spontaneous formation of the microemulsions, they can be prepared in one step by mixing the constituents with magnetic stirrer. The order of the addition of the constituents is not considered a critical factor for the preparation of micro emulsions, but it can influence the time required to obtain equilibrium.This time will increase if the co-surfactant is added to the organic phase, because its greater solubility in this phase will prevent the diffusion in the aqueous phase. This method is easier and much simpler then â€Å"supply of energy† method [25]. 1. 2. 3. 2 Process based on supply of energy: In this case, microemulsions are not obtained spontaneously. A decrease of the quantity of surfactants results in the use of high-pressure homogenizers in order to obtain the desired size of droplets that constitute the internal phase as opposed to the former technique [23].Benita and Levy [18] have studied the efficacy of various equipment for obtaining particles of different sizes. Two steps are required: the first step produces a coarse emulsion (0. 65 mm) by using a high-speed mixer. The second step consists of using a high pressure homogenizer. The dispersion of the oily phase in the aqueous phase is also facilitated by heating the phases before mixing them, the choice of the temperature depending on the sensitivity of the drug to heat.Cooling the preparation is required before its introduction in the high-pressure homogenizer, which can raise the temperature. A blue opalescent micro emulsion is obtained. 1. 2. 4 Review of literature: The microemulsion dosage form provided a delayed pharmacological action compared to the pharmacological action of regular eye drops. This observation led to the conclusion that the micro emulsion eye drops have a real advantage compared to regular eye drops which must be administered four times a day due to the short duration of the pharmacological action.According to Naveh et al. 
1.2.4 Review of Literature

The microemulsion dosage form provided a delayed pharmacological action compared with that of regular eye drops. This observation led to the conclusion that microemulsion eye drops have a real advantage over regular eye drops, which must be administered four times a day because of the short duration of their pharmacological action. According to Naveh et al., it appeared that the retention of pilocarpine in the internal oil phase and at the oil-water interface of the emulsion is sufficient both to enhance ocular absorption of the drug through the cornea and to increase the corneal concentration of pilocarpine. After comparing the diffusion profiles of two microemulsion preparations and an aqueous solution of pilocarpine, Hasse and Keipert [29] studied their pharmacological effect in vivo using six rabbits per group. The results obtained differed from those observed in vitro: the two microemulsions provided a delayed release compared with the release of the drug incorporated in the aqueous solution.

No experimental study has been conducted with microemulsions prepared by autoemulsification; however, several trials have been conducted with microemulsions prepared by supply of energy. Melamed et al. [27] prepared microemulsions containing adaprolol maleate. According to these authors, no ocular irritation was noticed in the group of forty healthy volunteers, in contrast to regular eye drops. The depressor effect was delayed, and the intra-ocular pressure was still high 6 and 12 h after instillation of the microemulsion. A single instillation of a microemulsion or of the corresponding placebo (the microemulsion without any drug) was administered to twenty healthy volunteers. The parameters determined were the pupillary diameter and the variation in intra-ocular pressure. The effect of the pilocarpine-containing microemulsion was obvious compared with the placebo and was noticed within 1 h of instillation; the return to the initial values occurred within 12 h [28, 29]. Lv et al. [32] investigated microemulsion systems composed of Span 20/80, Tween 20/80, n-butanol, H2O, and isopropyl palmitate (IPP)/isopropyl myristate (IPM) as model drug-carrier systems for eye drops. The results showed that the stability of chloramphenicol in the microemulsion formulations was increased remarkably.

Another study examined the effect of a single dose of atenolol 4% eye drops in 21 patients with primary open-angle glaucoma during a double-blind clinical trial, monitoring intraocular pressure (IOP), blood pressure, and pulse rate. At three and six h after medication, the average reduction of IOP was 7. and 4.1 mm Hg, respectively, compared with the baseline readings without medication. The reduction of IOP at four h after medication was 6.3 mm Hg compared with the pretreatment value, corresponding to an average change of 22 percent from the pretreatment value. Blood pressure and pulse rate did not change significantly, and no subjective or objective ocular side effects were observed. The duration of the effect of a single dose of atenolol 4% eye drops is approximately six h. Atenolol 4% eye drops may become a useful agent in the medical treatment of glaucoma if a long-term effect can be demonstrated and no ocular side effects emerge [30].
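The figures quoted for the atenolol 4% eye-drop study are internally consistent, and the implied pretreatment IOP can be back-calculated from them. A minimal sketch, using only the 6.3 mm Hg reduction at four hours and the reported 22 percent change (no values are taken from outside the text):

```python
# Back-of-the-envelope check of the atenolol 4% eye-drop figures quoted above.
# Only the 6.3 mm Hg reduction at 4 h and the 22% relative change are used.

reduction_mmhg = 6.3      # IOP reduction at 4 h after medication
relative_change = 0.22    # reported average change from the pretreatment value

implied_baseline = reduction_mmhg / relative_change   # ~28.6 mm Hg
iop_at_4h = implied_baseline - reduction_mmhg         # ~22.3 mm Hg

print(f"Implied pretreatment IOP: {implied_baseline:.1f} mm Hg")
print(f"Implied IOP at 4 h:       {iop_at_4h:.1f} mm Hg")
```

An implied baseline of about 28.6 mm Hg is consistent with elevated pretreatment pressures in this patient group, which supports the internal consistency of the quoted figures.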
1.3 Atenolol

Atenolol is a selective β1 receptor antagonist, a drug belonging to the group of beta blockers (sometimes written β-blockers), a class of drugs used primarily in cardiovascular diseases. Introduced in 1976, atenolol was developed as a replacement for propranolol in the treatment of hypertension. The drug works by slowing the heart and reducing its workload. Unlike propranolol, atenolol does not pass through the blood-brain barrier, thus avoiding various central nervous system side effects [25]. Atenolol is one of the most widely used β-blockers in the United Kingdom and was once the first-line treatment for hypertension. The role of β-blockers in hypertension was downgraded to fourth-line in the United Kingdom in June 2006, as they perform less appropriately or effectively than newer drugs, particularly in the elderly. Some evidence suggests that, even at normal doses, the most frequently used β-blockers carry an unacceptable risk of provoking type 2 diabetes.

Figure (3): Chemical structure of Atenolol [26]