LEE_Economics Graduate Program - PhD

Recent Submissions

Now showing 1 - 5 of 7
  • Item
    The digital divide in Turkey
    (Graduate School, 2023-04-05) Dalgıç Tetikol, Deniz Ece ; Güloğlu, Bülent ; 412162004 ; Economics
    Information and communication technologies (ICT), which broadly include telecommunications, mobile telephony, the Internet, and various Internet-enabled devices, have permeated most aspects of life by offering new and more efficient ways for people to communicate, access information, and learn. With applications in education, banking, e-commerce, health, and government services, among other areas, ICT are a major force behind economic growth and productivity, connecting people to essential services and jobs while enabling businesses to operate. The digital divide describes the uneven distribution of ICT in society. At a high level, the digital divide is the gap between those who have access to the Internet and those who do not. However, the digital divide is not binary but multifaceted and includes many factors such as connectivity, access to equipment, affordability, quality of service, motivation to use the technology, and digital skills. Therefore, the digital divide encompasses various Internet-related challenges, which result in different types (levels) of digital inequality. The digital divide literature categorizes these challenges as follows: differences in access (the first-level digital divide), in usage and digital skills (the second-level digital divide), and in outcomes of Internet use (the third-level digital divide). These digital gaps exist at the international level as well as within a country. These gaps often fall along other social inequalities in a country – that is, the different levels of the digital divide usually reflect the gaps between individuals from different demographic backgrounds and at different socioeconomic levels with regard to their opportunity and ability to access and utilize ICT. This thesis empirically examines the different aspects of the digital divide in Turkey.
It explores the demographic and socioeconomic determinants of the first and second levels of the digital divide in Turkey and analyzes data that substantiate digital inequalities between 2008 and 2020. Digital transformation and technological advancements in ICT offer tremendous opportunities for countries, especially for emerging economies. However, the full potential of digital advancements cannot be achieved by focusing solely on supply-side policies such as investing in infrastructure deployment. Despite increased Internet penetration rates, Turkey has failed to create a digitally inclusive society and risks missing out on the benefits of digitization. One of the leading factors contributing to this problem is the lack of effective demand-side solutions mitigating the digital divide. The results of this thesis suggest that significant disparities persist between different social groups in Turkey in terms of Internet access and Internet usage patterns. The findings of this thesis identify the first-priority target groups for addressing the first-level and second-level digital divide in Turkey. We can conclude from the results that special initiatives and programs are required to promote widespread adoption of the Internet. Those initiatives and programs should be designed and implemented with a participatory approach, targeting the high-priority groups: women, older citizens, citizens with lower household income, citizens with low educational background, homemakers, and retired people, as well as citizens of the Northeast Anatolia, Central East Anatolia, Southeast Anatolia and West Black Sea regions of Turkey in particular, while considering the factors that make Internet access and use difficult.
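Determinant analyses of this kind are typically estimated with a binary-outcome model such as a logit. The sketch below is a minimal, hypothetical illustration on synthetic data (not the thesis's actual specification, variables, or survey data) showing how a first-level divide with the sign pattern described above, access falling with age and rising with education and income, would surface in the estimated coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Synthetic covariates, for illustration only:
age = rng.uniform(18, 80, n)
educ = rng.integers(0, 16, n).astype(float)   # years of schooling
income = rng.normal(0, 1, n)                  # standardized household income

# Assumed "true" model: access falls with age, rises with education and income
X = np.column_stack([np.ones(n), age / 10, educ, income])
beta_true = np.array([-1.0, -0.3, 0.25, 0.8])
p = 1 / (1 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)          # 1 = has Internet access, 0 = does not

def fit_logit(X, y, iters=50):
    """Newton-Raphson maximum likelihood for a logit model."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - mu)
        H = X.T @ (X * (mu * (1 - mu))[:, None])   # Fisher information
        beta = beta + np.linalg.solve(H, grad)
    return beta

beta_hat = fit_logit(X, y)
# Estimated signs mirror the divide: older -> less access,
# more education / higher income -> more access.
print(np.round(beta_hat, 2))
```

In practice the regressors would be the survey's demographic and socioeconomic indicators (gender, region, employment status, and so on), most of them categorical dummies rather than the continuous stand-ins used here.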
  • Item
    Network neutrality in the internet as a two sided market constituted by congestion sensitive end users and content providers
    (Graduate School, 2023-03-01) Kaplanlıoğlu, Özgür ; Güloğlu, Bülent ; Ecer, Sencer ; 412152004 ; Economics
    In this paper, I envision a two-sided market mediated by a monopolistic internet service provider, ISP. The ISP provides end-users internet access and carries content providers' (CPs') data packages on its network. I compare the case where network neutrality is strictly practiced with the case where the ISP can "throttle" the traffic of certain content providers. In the model, for simplicity, a single CP is exposed to throttling, while the other CPs, which are part of a continuum, are not. I then study the implications of the violation of network neutrality on total data consumption, congestion, and capacity investment. Under network neutrality, the decision variables of the ISP are the end-user price and the network bandwidth. I find that, in equilibrium, because of the monopolistic nature of the market, the end-user price is greater than the price under competitive equilibrium, and the network bandwidth is lower than its socially optimal value. Thus, under neutrality, the ISP undersupplies both the capacity and the data. Under discrimination, the ISP is allowed to charge one of the content providers (the discriminated CP) an access fee per unit of bandwidth. To reflect a scenario of great practical value, I choose the discriminated CP to be one of the big OTTs such as YouTube, Instagram, Facebook, Netflix, etc. In this setting, to access the network, the discriminated CP needs to buy bandwidth from the ISP. However, the bandwidth bought by the discriminated CP is not for its exclusive use; it rather acts as an upper bandwidth limit for the discriminated CP. Under discrimination, the decision variables of the ISP are the end-user price, the network bandwidth, and the bandwidth price. I find that, when allowed, the ISP always prefers to deviate from network neutrality by charging a positive price for bandwidth. Also, the ISP sets the bandwidth price just high enough to keep the discriminated CP in the market.
Comparing the equilibrium outcomes, I show that under discrimination, the ISP charges a lower price to end-users. However, the discrimination also leads to less network bandwidth installed. Both the lower end-user price and the lower network bandwidth contribute to congestion. Thus, congestion is higher under discrimination than under neutrality. Considering its adverse effects on the network bandwidth and congestion, although the end-user price is lower under discrimination, I recommend that the network neutrality principle not be abolished.
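The monopoly undersupply result summarized above can be illustrated with a standard textbook comparison. The sketch below uses a hypothetical linear demand and constant marginal cost; it only captures the price and quantity comparison, not the dissertation's full model with congestion and capacity choice.

```python
# Stylized monopoly-vs-competition comparison: with linear demand
# q = a - b*p and constant marginal cost c, the monopolist's price exceeds
# the competitive (marginal-cost) price and its quantity falls short.
a, b, c = 10.0, 1.0, 2.0   # hypothetical demand and cost parameters

p_comp = c                  # competitive equilibrium: price = marginal cost
q_comp = a - b * p_comp

# Monopoly: maximize (p - c)(a - b*p)  ->  p* = (a/b + c) / 2
p_mono = (a / b + c) / 2
q_mono = a - b * p_mono

print(p_mono, p_comp)       # monopoly price above competitive price
print(q_mono, q_comp)       # monopoly quantity below competitive quantity
```

In the dissertation's setting the "quantity" side also involves the bandwidth choice, which is likewise undersupplied relative to the social optimum.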
  • Item
    Origin and destination based demand of continuous pricing for airline revenue management
    (Graduate School, 2023-08-31) Değirmenci, Mehmet Melih ; Aydemir, Resul ; 412162005 ; Economics
    This thesis presents an approach for estimating sell-up rates, which indicate a passenger's likelihood to upgrade to a higher fare class, based on historical booking data categorized by fare classes. Previous models explored in the literature, including Direct Observation (DO) and Inverse Cumulative (IC), have demonstrated limitations when applied to real-world historical booking data, as their outcomes may not align with the desired business expectations. To address this limitation, we enhance these models by incorporating data pre-processing techniques and devising solution strategies that cater to the needs of industry practitioners when dealing with historical booking data. By incorporating fare class availability data and adjusting past bookings accordingly, our proposed model offers a robust estimation of sell-up rates. To validate the effectiveness of our approach, we conduct an analysis using data from a major European airline. The numerical results demonstrate a significant decrease in the Mean Absolute Percentage Error (MAPE) when employing our proposed method, indicating its superior accuracy in estimating sell-up rates. This research not only fills the gap in the existing literature but also provides practical implications for revenue management practitioners. By refining the sell-up rate estimation process and addressing the shortcomings of traditional models, our approach offers a valuable tool for airlines to optimize their revenue management strategies. The utilization of historical booking data, combined with our model's enhancements, ensures more reliable and actionable insights, empowering practitioners to make informed decisions. Furthermore, our study contributes to the field by introducing data pre-processing techniques tailored specifically for historical booking data analysis. These techniques facilitate the extraction of relevant information and enhance the accuracy of sell-up rate estimations.
As such, our research provides a comprehensive framework that encompasses both theoretical advancements and practical applications, thus offering a holistic approach to addressing the challenges of sell-up rate estimation in revenue management. In summary, the first chapter introduces a new method for estimating sell-up rates by leveraging fare class-based historical booking data. Through the refinement of existing models, along with the incorporation of data pre-processing techniques and solution strategies, our approach yields more accurate sell-up rate estimations. The analysis of data from a major European airline demonstrates the effectiveness of our proposed method in reducing the Mean Absolute Percentage Error (MAPE). By enhancing sell-up rate estimation accuracy, our research contributes to the advancement of revenue management practices and provides valuable insights for industry practitioners. In the second chapter, we present an innovative model for forecasting airline flight load factors specifically designed to account for the unique circumstances brought about by the Covid-19 pandemic. By incorporating various variables, including bookings, capacity, booking trends, and seasonal effects, our model aims to provide accurate load factor predictions. To validate its effectiveness, we conducted an extensive analysis using the dataset of one of Europe's largest network airlines, spanning the entirety of 2021. The findings of our study reveal that machine learning algorithms offer substantial improvements in load factor predictions compared to the traditional pickup method. Notably, the Covid-19 pandemic period introduces distinctive patterns and challenges to airline load factor data, leading to decreased performance of the pickup method. However, by leveraging advanced machine learning models, we were able to effectively capture the complexities and variations in load factors during this turbulent period, resulting in significantly enhanced accuracy. 
Our proposed model demonstrates a remarkable reduction in the Mean Absolute Error (MAE) score for load factor forecasts. When compared to the pickup method, the MAE score decreased from 7.94 to an impressive 1.99. These results underscore the potential of advanced machine learning techniques in accurately predicting load factors, particularly in the face of unprecedented disruptions like the Covid-19 pandemic. The incorporation of diverse variables into our model enables a comprehensive assessment of the factors influencing load factor dynamics. By considering variables such as bookings, capacity, booking trends, and seasonal effects, our model captures the intricate interplay between these factors and load factor performance. This comprehensive approach enhances the accuracy and reliability of load factor forecasts, providing airlines with valuable insights for informed decision-making. The outcomes of this research highlight the significance of leveraging advanced machine learning techniques for load factor forecasting during challenging periods like the Covid-19 pandemic. The ability to effectively capture and analyze complex data patterns empowers airlines to adapt their strategies and optimize resource allocation in response to changing demand dynamics. By embracing the potential of machine learning, airlines can gain a competitive edge and make data-driven decisions to navigate through turbulent times successfully. In conclusion, this chapter introduces a new model for forecasting airline flight load factors, specifically tailored to the unique circumstances presented by the Covid-19 pandemic. Through the utilization of machine learning algorithms and the incorporation of various variables, our model surpasses the traditional pickup method in terms of accuracy. The significant reduction in the Mean Absolute Error (MAE) score demonstrates the efficacy of our proposed model in capturing the complexities and variations in load factor data during the pandemic.
By providing more accurate load factor forecasts, our research equips airlines with valuable insights to optimize their operations and navigate through challenging times effectively.
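Both chapters evaluate forecasts with standard error metrics: MAPE for the sell-up rate estimates and MAE for the load factor forecasts. The sketch below implements the two metrics on toy numbers; the airline dataset and the reported 7.94-to-1.99 MAE improvement are from the thesis and are not reproduced here.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent of the actual values."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def mae(actual, forecast):
    """Mean Absolute Error, in the units of the data."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

# Toy load factors (percent of seats sold) and two hypothetical forecasts:
actual   = [80.0, 75.0, 90.0, 60.0]
naive    = [70.0, 70.0, 70.0, 70.0]   # a constant, pickup-style guess
improved = [78.0, 74.0, 88.0, 62.0]   # a forecast tracking the actuals

print(mae(actual, naive), mae(actual, improved))
print(round(mape(actual, naive), 2), round(mape(actual, improved), 2))
```

Note that MAPE is undefined when an actual value is zero, which is one reason MAE is the more natural metric for load factors that can approach zero on empty flights.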
  • Item
    A Dutch disease approach into the premature deindustrialization
    (Graduate School, 2022-08-18) Çakır, Muhammet Sait ; Aydemir, Resul ; 412142006 ; Economics
    We explore the main causes and consequences of the premature deindustrialization phenomenon. We argue that local currency overvaluations, mainly associated with a surge in capital inflows into the emerging market economies following the deregulation of their capital accounts, severely hurt the output share of the manufacturing industry. First, we empirically establish a causal link from capital flows to local overvaluations. According to the two-way error component model, which controls for the full set of country and time fixed effects, a surge in capital flows by one standard deviation is associated with an overvaluation of 1.67 percent. To address the possible endogeneity between capital flows and the real exchange rate, we run a bivariate first-order panel vector autoregressive model, since the feedback effects from overvaluation to net financial inflows might introduce a bias into the fixed-effects estimation. When we isolate the effect of a positive capital inflow shock of one standard deviation by the Cholesky decomposition, we find that it is statistically significantly associated with an immediate overvaluation in real terms at the 95 percent confidence level. Then we construct our baseline regression model. Applying second-generation estimators allowing for cross-section dependency (Augmented Mean Group and Common Correlated Effects Mean Group), we run a panel data regression model based on a sample of 39 developing countries in Latin America, Sub-Saharan Africa, East Asia, North America, and Europe from 1960 to 2017. We find that an overvaluation of 50 percent, which corresponds approximately to one and a half standard deviations, is associated with a contraction of the manufacturing output share as high as 1.25 percent over a five-year period. With the turn of the new century, the developing countries also experienced a massive deindustrialization by shedding manufacturing value-added as large as 1.24 percent of national income.
Moreover, the evidence suggests that the relationship between real exchange rate misalignments and the manufacturing share in output might be nonlinear, so that the manufacturing competencies which have been eroded by local currency overvaluations in real terms cannot simply be brought back during undervaluation periods. We also show that the baseline regression results are robust to different data sets, alternative real exchange rate/deindustrialization measurements, and dynamic model specifications which allow us to treat the real exchange rate as an endogenous variable to address any potential concern regarding simultaneity bias. As a further robustness check on our findings, we empirically examine the effects of supply chain disruptions, inequality shocks, and institutional innovations on the path of industrialization in developing countries by running a panel vector autoregressive model. We find that a deterioration in income distribution unequivocally harms the developing countries' bid for industrialization, while better institutions, proxied by an improvement in regulatory quality, invariably foster it. On the other hand, the effects of supply chain disruptions on the pace of industrialization follow a nonlinear path, showing the great resilience of local industries in absorbing imported input bottlenecks through intermediate input import substitution. We also provide evidence that backward participation in global value chains (GVCs) and regulatory quality do not mutually Granger-cause each other, and suggest that the well-established link from better governance to GVCs may be missing in the developing-country case. Based on these empirical findings, the need for a comprehensive industrial policy, along with a firm use of capital controls and macroprudential measures given a robust institutional framework, comes out as the main policy implication of our work, and these policy options are duly discussed in light of recent developments in the literature.
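The two-way error component model used above removes country and time fixed effects before estimating the slope. For a balanced panel this can be done by double demeaning: subtract each country's mean and each year's mean, then add back the grand mean. The sketch below illustrates that estimator on synthetic panel data with hypothetical parameter values; it is not the thesis's data or specification, and it omits the cross-section dependency corrections (AMG/CCEMG) the thesis applies.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 39, 30   # 39 countries as in the thesis sample; years are illustrative

alpha = rng.normal(0, 1, N)[:, None]   # country fixed effects
gamma = rng.normal(0, 1, T)[None, :]   # year fixed effects
# Regressor (e.g. overvaluation) deliberately correlated with both effects:
x = rng.normal(0, 1, (N, T)) + 0.5 * alpha + 0.5 * gamma
beta_true = -0.4                        # hypothetical effect on mfg. share
y = beta_true * x + alpha + gamma + rng.normal(0, 0.1, (N, T))

def within_two_way(z):
    """Double-demean a balanced panel: remove entity and time means,
    adding the grand mean back once."""
    return (z - z.mean(axis=1, keepdims=True)
              - z.mean(axis=0, keepdims=True) + z.mean())

xd, yd = within_two_way(x), within_two_way(y)
beta_hat = (xd * yd).sum() / (xd * xd).sum()   # OLS on transformed data
print(round(beta_hat, 2))
```

Because the double demeaning removes the additive country and year effects exactly, the OLS slope on the transformed data recovers the true coefficient despite the regressor being correlated with both sets of effects.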
  • Item
    Net neutrality in oligopolistic models of content provision and internet service provision markets
    (Graduate School, 2022-09-13) Erkul, Turgut ; Ecer, Sencer ; 412152005 ; Economics
    The importance of telecommunications across societies and industries is growing tremendously. From entertainment to even the most basic needs, such as ordering potable water right to our doorsteps, we rely on telecommunication networks to provide the means. Behind the scenes there is a complex mesh of advanced technology with an evolving market interaction of Content Providers (CPs) and Internet Service Providers (ISPs) racing to profit from the end-users (EUs). National Telecommunications Regulatory Authorities (NTRAs) in each country regulate the market to maximize total welfare. Net Neutrality (NN) is the mechanism, implemented and safeguarded by the NTRAs, that protects against discrimination of data. As a principle, NN advocates that all data is created equal and shall not be throttled, discarded, de-prioritized, or charged differently than any other data. Furthermore, NN prevents ISPs from charging CPs termination fees for access to the EUs. Content Providers (CPs) seem to be generally pro-NN and ISPs seem to be against NN, likely because of the relative inelasticity of end-user demand for ISPs compared to CPs, which is reflected in the joint demand structure as I model it in this dissertation. Recent academic articles have focused on successive monopoly or successive oligopoly models in vertically related markets to explain the dynamics of the CP, ISP, and end-user interaction. In these models, upstream is the CP (e.g., Netflix, BluTV) and downstream is the ISP (e.g., Comcast, TTNet). In early models, CPs and ISPs are assumed to be perfect complements. Therefore, the termination fee that the CP pays to the ISP becomes irrelevant, and hence does not impact the prices to the end-user or the total welfare. This result is not consistent with what we observe in the industry, like the case between South Korea Broadband and Netflix (Bae et al., 2021).
Indeed, there is mounting pressure from ISPs to allow these payments, which means these fees are not irrelevant. My conclusion is that the perfect complementarity assumption is inappropriate for explaining the industry. In my model, I introduce imperfect complementarity, which relaxes the constraint that the quantity of the CP's service demanded and the quantity of the ISP's service demanded be equal, and I show that introducing a non-zero termination fee may indeed increase total welfare. Therefore, I recommend that NTRAs consider the termination fee as a lever for maximizing social welfare within each country. Furthermore, I show that the need for net neutrality depends on the level of complementarity and the own-price effects of the ISP and the CP relative to each other.
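The contrast between perfect and imperfect complementarity can be made concrete with a hypothetical linear demand system (not the dissertation's exact specification). Under perfect complementarity, one unit of content requires one unit of access, so the two quantities coincide by construction; with separate own- and cross-price effects, they need not.

```python
# Hypothetical linear demands for two complementary services. Each quantity
# responds to its own price (coefficient b) and to the complement's price
# (coefficient d > 0, since a pricier complement reduces demand). Parameter
# values are illustrative only.
def demand(p_cp, p_isp, a=10.0, b=1.5, d=0.5):
    """Return (CP quantity, ISP quantity) under imperfect complementarity."""
    q_cp = a - b * p_cp - d * p_isp
    q_isp = a - b * p_isp - d * p_cp
    return q_cp, q_isp

# With different prices the two quantities come apart, unlike the perfect
# complements case where q_cp = q_isp always holds:
q_cp, q_isp = demand(p_cp=2.0, p_isp=4.0)
print(q_cp, q_isp)

# With equal prices the symmetric system still yields equal quantities:
q1, q2 = demand(3.0, 3.0)
print(q1, q2)
```

It is this wedge between the two quantities that gives the termination fee real bite in the model: the fee shifts relative prices and hence both quantities, rather than cancelling out of a single joint demand.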